Introduction

I was a self-hosting hobbyist and enthusiast way before I got to do it professionally.

Most questions I see from beginners lead with: ‘Is there an absolute beginner guide?’ As I’ve said many times: that depends on what you are looking for!

  • Is there a secret fountain of knowledge on the internet? No.
  • Is there a genie that can answer all your questions correctly and instantly? No.

Obviously, that’s fantasy (even though Google does exist), but I still asked myself: what could they actually mean? So I thought about it… and maybe they just mean: “How do I start?”

So I’ll write exactly that. In this “super beginner” self-hosting document, I will tell you everything you need to know to host anything!

What is self-hosting and why you should do it

As a hobbyist, you have to understand that self-hosting is mainly a decision to disconnect yourself from massively used online services, usually owned by big corporations, in an effort to reclaim your privacy, your personal economy and your IT knowledge as a whole. All of it depends on your technical skills and on your ability to ‘host’ services (or software) on your local or ‘private’ cloud machines.

The ‘why’ depends on you. Why are you interested in doing this? If we take self-hosting only by its merits:

  • Privacy
  • Ownership
  • Choice

You get the privacy of not being spied on by corporations, the sole ownership of your files, and the choice of being limited only by the boundaries of the software you run and by your own technical knowledge of what you’re trying to do.

For example — by using Google Drive instead of your own Seafile, you are limited by Google’s decisions, Google’s TOS and Google’s pricing; every file you upload will be on Google’s servers, and it will not be private, regardless of how much ‘encryption’ they tell you they use or how much trust you put in them.

So, in short: ‘why’ is something you have to answer yourself. I will give you the good and the inevitable bad, but this is not a question anyone else can really answer.

When and why you shouldn’t self-host

This is a topic that I am still exploring daily. And you will too. Some people have particular things they recommend you don’t self-host (example: email). Some people don’t trust themselves to host sensitive services (examples: password managers, backups or cloud storage). And some people just do not know what they are doing, or don’t want to learn. When you consider these ‘archetypes’, you should be able to form a better idea of the scope of YOUR responsibilities when it comes to self-hosting.

You don’t answer to anyone when it comes to self-hosting… but, that also means you are the sole architect of your own demise. It sounds very dooming, but it’s the truth.

For example:

  • If you host your own Seafile installation and use it daily as your cloud storage, and let’s say you upload some precious photographs, but your server goes under: you’re fucked.
  • If you host your own Bitwarden with ALL your passwords in it, but you don’t know how to secure your network or use properly encrypted channels, or you just never update it: you’re also fucked.
  • If you host your own personal email but you don’t know how to properly set up your DNS, your mails are NEVER gonna reach their destination. And your domain/IP is going to get blacklisted by Microsoft and Google.

It goes without saying that failing to understand this part of the document invalidates the other two points I made about self-hosting: ownership and privacy. You might lose your files, they might get stolen, they might get leaked. And that’s just one example.

Self-hosting is not a joke, it’s a serious hobby that is extremely rewarding and fun to have, but it needs time and passion.

Local vs. Remote

Now we can look at the first decision you have to make.

There are trade-offs between hosting locally and hosting remotely. A few notable ones should be enough for anyone to make a rational decision, and they all revolve around a few key concepts.

  1. Cost
  2. Reliability
  3. Privacy

Now, the big one is obviously cost. Keeping a server on 24/7 can quickly become a sizable investment, depending on what hardware you’re using.

An old repurposed desktop PC will inevitably cost you much more in energy than a Raspberry Pi 4, so the trade-off between required performance and running cost is something you have to consider. Generally speaking, a Raspberry Pi 4 (starting from the 4GB RAM version) is more than enough, as it can run pretty much anything.
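To put rough numbers on that, here is a back-of-envelope yearly cost comparison. The wattages (60 W for an idle desktop, 6 W for a Pi 4) and the €0.30/kWh price are assumptions of mine — plug in your own hardware and electricity bill:

```shell
# Back-of-envelope yearly energy cost.
# Assumed figures: 60 W old desktop, 6 W Raspberry Pi 4, 0.30 EUR/kWh.
awk 'BEGIN {
  hours = 24 * 365
  printf "desktop: %.0f EUR/year\n", 60 / 1000 * hours * 0.30
  printf "pi4:     %.0f EUR/year\n",  6 / 1000 * hours * 0.30
}'
```

Even with generous assumptions, the always-on desktop costs roughly ten times more per year than the Pi.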

One option is renting a cloud instance on AWS or Digital Ocean or a dedicated server on Hetzner or Online.net. This obviously gets rid of many ‘issues’, and in some cases, it might lower the overall cost you will have to bear.

When it comes to cost, it just depends on your wallet. Obviously, there are many more advantages to having hardware close to you, like absolute privacy, faster and more personal handling of failures, and peace of mind when it comes to bills, since the only recurring costs are internet and energy anyway.

Following this comes the second big point, which is reliability. The drawback of hosting hardware yourself is that you are the sole arbiter of your own demise (as I explained earlier). Having a server hosted locally means that you have to care for its longevity and reliability. You will have to work to keep it online 24/7 and make sure it does not fail, using tools like a UPS.

With a remote solution everything will be ‘taken care of’, and that should give you peace of mind. But it also means you lose the power of choice. You have much less room to upgrade in the future, and switching to something more powerful can be a major pain in the ass.

The third point is much more obvious. Unless you take the necessary steps (when you even can), nothing that runs on a rented server is privacy-safe. Your files aren’t exclusively in your hands, and you cannot know for certain who has access to them unless you encrypt them.

Hardware and use-case

Use case is situational by definition, but it’s vital to learn not to overshoot or undershoot your needs.

Simply put, at one point in my life, I had a single Raspberry Pi 4 4GB that handled:

  • Docker (running containers the likes of Sonarr, Radarr, Jackett, Beets, TheLounge)
  • PiHole (which also was my internal DNS)
  • Node-Red
  • NGINX
  • PHP-FPM
  • MySQL
  • Redis
  • Grafana
  • Telegraf
  • InfluxDB
  • qBittorrent

I do not recommend you do the same, but it can be done. It boils down to how lean you can stay and what the use case for everything is. Most of what I host is personal; the only thing that is not is this website. If you expect a lot of computing or ‘traffic’, you have to size your hardware carefully.

So, if you’re considering staying small and private, you don’t really need to pay 30/40 Euro a month to rent a 16GB RAM server with an old Intel Xeon and 4TB of HDDs, just buy a Raspberry Pi 4 and handle everything there.

When you need more ‘power’, there are a lot of very good options out there. It comes down to cost, so it’s something you will have to figure out. I will leave a resources chapter in this document so that you can check out everything yourself.

Bottom line: use whatever your wallet allows you to use.

Introduction to basic essentials

DNS

“Domain Name System” (DNS) is the internet translator. It exists because humans aren’t designed to remember numbers, but computers don’t interact via words. So what gives?

Every website, every service you interact with online is actually just an IP address. Imagine if you had to remember the IP(s) of every website you ever visit. Try it right now: in your browser’s address bar, go to 142.250.180.78.

This is why DNS exists. In its most basic form, it translates www.google.com into 142.250.180.78 (and vice versa).

Let’s imagine that your request (or query) for www.google.com is a package in your car. You know you have to deliver this package to get your reward or open the web page. You know the person’s name, but you don’t know their address.

Now imagine that down the road from your house there’s an information centre where you can stop and ask for directions. These centres are called “nameservers”. Each of them stores information about domains and their addresses, and they can also talk to other centres. So you go inside and ask the first “nameserver” you encounter: “Hey, who is www.google.com?” They probably won’t know; and when they don’t, they will go and find the authoritative “nameserver” for the domain you asked about, which will reply: “Ah yes, I know www.google.com, they are mine and their address is 142.250.180.78”.

So, for example: if you have 8.8.8.8 configured on your PC (as many people do) and you look up www.google.com in your browser, the answer is direct, because 8.8.8.8 is Google’s own resolver. But if you had 1.1.1.1, which is Cloudflare’s resolver, it couldn’t tell you the IP address of www.google.com on its own authority, because Cloudflare doesn’t own that DNS zone — it isn’t authoritative. So it goes and finds who is, simply by finding out who the nameservers for google.com are (every domain’s DNS zone needs to have at least one), and then asks them directly for the IP address, which they will share, if they can or want to.

Now, in reality, the process is a lot more complicated. What is called a “DNS zone” is, in essence, the DNS contents of a domain like google.com: all the “DNS records” that the zone stores on its authoritative nameserver(s). A “DNS zone” can also be split into two parts. The “external view” contains all the public records that anyone can use for various needs. The “internal view” defines private IPs that people INSIDE the domain google.com can contact, to access services that are only reachable from within Google’s own internal network.

I have written a DNS Record Handbook. I encourage you to read and learn what any specific and commonly used DNS records do and how to use them.

One case where knowing DNS records is of extreme importance is when you host your own mail server. The mail protocol and delivery process are deeply rooted in DNS. They require you to properly configure several records: the MX record, the A record and various TXT records containing DKIM, DMARC and SPF information, all there to identify who the actual sender of an email is and whether they are authorized to send from that particular server.
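As a sketch, the mail-related records for a hypothetical domain look something like this in zone-file form. Everything here — example.com, the IP, the selector name, the policy values — is a placeholder, not a recommendation:

```
example.com.                      MX   10 mail.example.com.
mail.example.com.                 A    203.0.113.25
example.com.                      TXT  "v=spf1 mx -all"
selector._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public-key-here>"
_dmarc.example.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The MX record says who receives mail, SPF says who may send it, DKIM lets receivers verify signatures, and DMARC tells them what to do when the first two fail.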

HTTP/S

“Hypertext Transfer Protocol” (HTTP) is the application protocol we use to distribute hyperlinked documents and resources that can be accessed within a web browser.

I know, it sounds like jargon. Even I don’t know how it technically works that well. It’s been an ever-present tool on the internet, nobody ever questions it. Companies like Google even like to HIDE it from people’s browsers.

Have you ever noticed? When you go on https://www.google.com on Chrome, it removes the https:// part (and even the www but I agree that should not exist, sue me.).

The http:// part is how we tell the browser what protocol we are using to communicate with what comes after it. It comes in flavours!

http:// is the standard protocol. It defaults to port 80 and is insecure, while its ‘newer’ flavour https:// is the ‘secure’ version of HTTP. It defaults to port 443 and is protected by a chain of trust built from certificates and keys.
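You can poke at the protocol locally without setting up a real web server. This sketch abuses Python’s built-in http.server purely as a stand-in (port 8080 is an arbitrary choice of mine):

```shell
# Start a throwaway HTTP server on port 8080 in the background
python3 -m http.server 8080 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# -I sends a HEAD request; the first line shows the protocol and status,
# something like: HTTP/1.0 200 OK
curl -sI http://localhost:8080/ | head -n 1

# Clean up
kill "$SERVER_PID"
```

Note that you typed http:// and never mentioned port 80 — you had to spell out :8080 precisely because 80 is the default the browser (and curl) would otherwise assume.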

I won’t dwell much on this chapter; it becomes extremely hard extremely fast, and my knowledge would fail me. Suffice it to say that you only care about how this works (in the scope of this document) when you get into web servers and reverse proxies, which we will later. And even then, HTTPS will only be tangentially relevant: most of it will be about how to generate certificates and how to use them to serve HTTPS connections.

Introduction to software essentials

Operating Systems

1. Introduction

This choice is largely dictated by your experience and preference, and while I do not enjoy gatekeeping, I absolutely must recommend against Windows Server, for the following reasons:

  • It’s actually more complicated than Linux
  • You’d miss out on the majority of ‘self-hostable’ software
  • It’s not free

Obviously anything in this document is written with Linux in mind, so if you were planning on using Windows Server, you’re on your own.

2. Debian vs. RHEL

Now, while I mainly use CentOS as the server distribution of choice at my day job, at home I actually use Debian-based distros — Ubuntu Server to be precise — which I can recommend.

Debian vs RHEL is a simple personal quality-of-life decision: I’ve found through the years that CentOS (being an enterprise solution) requires much more work for simple tasks like self-hosting a torrent client. As this is a beginner document, I will assume you’re going with Ubuntu Server; going into why specifically I don’t recommend CentOS would not make sense in this context.

3. For the Raspberry Pi

In my opinion, when it comes to using the Raspberry Pi you have two OS options:

  1. Ubuntu Server (for ARM)
  2. DietPi

With Ubuntu Server you’ll have to do everything manually, but at the same time you get a standard operating system that you have much more control over. It’s a solid option, and usually it’s what I pick. Really not much more to say.

DietPi strikes a very good balance between a ‘skilled’ approach and a user-friendly approach to a Debian-based operating system. (I think DietPi is literally Debian with a bunch of ease-of-use scripts.) If you’d like to get comfortable in an SSH terminal while still having easy access to the applications you’re going to install, then this is the option for you. DietPi will let you install most software through a simple menu-driven terminal interface and do everything on its own. At the same time, it lets you access the OS shell, so you can basically do whatever you want on it, as long as you don’t mess with DietPi itself.

4. On Cloud instances

All cloud instance providers, whether you rent a dedicated server or a VPS, will let you pick your own operating system. Everything I’ve written so far applies here too. I will list every single service I’ve ever used in the references section of this document.

5. A warning about interacting with your OS

Cue the rant: I’ve seen many people recommend Webmin or other web-based management interfaces. Some go as far as recommending that others install a VNC server on a server distribution, just so they can have a desktop.

I want to be clear here: those things are illegal.

I absolutely must compel you to learn how to properly use SSH, so please continue to the following section and FORGET about the things I mentioned and AVOID people who recommend them to you.

SSH

First of all: read the first paragraph of this Wikipedia page.

Obviously, explaining what it is technically is out of the scope of this document, so let’s just say that SSH is the way we securely connect to a server’s terminal (or shell). It’s a cryptographic network protocol, meaning that it’s secure even through the internet.

SSH has a standard port of 22, and unless changed, this is the port used for incoming SSH connections. If for some reason that port gets blocked or closed, you can wave goodbye to your system, unless you have other ways of accessing it. Keep this in mind.
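Also worth knowing: instead of typing the port, user and key on every connection, you can give each server an alias in ~/.ssh/config. A sketch — the alias, IP and key name below are all made up:

```
Host homeserver
    HostName 203.0.113.10
    User your_user
    Port 22
    IdentityFile ~/.ssh/your_private_key_1
```

After that, a plain `ssh homeserver` fills in everything else for you.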

Web servers and Reverse proxies

Generally speaking, most web servers also have reverse proxy capabilities. Their logic is similar; they differ in how they’re used. To put it simply, they are software that serves a service, or specific content, directly to the internet or to a local network.

For example, a web server could serve a picture from a directory on your server, while a reverse proxy could serve web access to a service running locally on your Docker host.

They are (usually) the point on a machine that receives requests from the internet and relays information back to browser clients.
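To make that concrete, here is a minimal NGINX reverse-proxy server block. This is a sketch only: the domain and the upstream port 8080 are placeholders, and a real deployment needs HTTPS on top of it:

```
server {
    listen 80;
    server_name service.example.com;

    location / {
        # Forward everything to a service listening locally on port 8080
        proxy_pass http://127.0.0.1:8080;
        # Pass the original host and client IP along to the upstream service
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The service itself never faces the internet; NGINX stands in front of it and relays requests both ways.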

Get into tinkering shape (on Windows)

In this chapter, also divided into sub-chapters, I will explain, step by step, a complete workflow to get you to a working result.

A Windows slave workflow

If you are a Windows slave, like me, this is the section for you. In this sub-chapter I will give you tips, tricks and things you MUST HAVE to ease the incredibly excruciating pain of having to use Windows because you want to FUCKIN’ PLAY VIDEOGAMES AND RUN DAZ3D AND—

2. WSL (Windows Subsystem for Linux)

Docs

The best way to do most of the things I will explain later on is to not use Windows. So instead, we will install WSL 2 on our system. It’s essentially a Linux virtual machine that runs deeply integrated with your Windows installation. You will control it via the Terminal I will ask you to install later in this sub-chapter.

To install WSL 2.0, follow these steps:

  1. From Start, type Powershell and open the Windows Powershell as Administrator
  2. Once open, in the terminal type wsl --install
  3. It might ask you to reboot, please do so.

This will install all the requirements you need to run WSL and a default distribution of Ubuntu 20.04, which we like.

WARNING: If for some reason it won’t install Ubuntu by default, go on the Windows Store and install Ubuntu 20.04 from there. (Yes, really.)

3. Chocolatey

Docs

Let’s pause WSL for a minute now, and proceed instead with installing Chocolatey which is a package manager for Windows, similar to apt on Debian-like distributions. As you can see there’s a pattern, and that pattern is making sure Windows isn’t Windows but actually Linux.

To install Chocolatey, follow these steps:

  1. Open PowerShell as an Administrator
  2. Send the command: Get-ExecutionPolicy if it returns Restricted then send: Set-ExecutionPolicy AllSigned and press A for the Yes to all option (you should read the description on what this does)
  3. Send the command: Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')) (Or copy it directly from the documentation page I linked above, why are you trusting a stranger like me, you dumb fuck)
  4. Once completed, use choco -? to verify it installed correctly and read the help output

You are now free from double-clicking and downloading installers from the internet. Congratulations.

Just run:

  • choco find any-software-goes-here to search for software you need
  • choco install any-software-goes-here to install said software
  • choco upgrade all to update all software you installed with Chocolatey
  • choco uninstall any-software-goes-here for when you get tired

Essentially, what Chocolatey does is manage the installation of any software you ask it to install. Packages are installed in a location that contains all of their dependencies and is entirely known to Chocolatey, so when you uninstall them they won’t leave garbage behind, leaving you wondering why your fucking Windows installation is slow. Stop sending me support emails.

3. Windows Terminal.app

Docs

Windows Terminal is the quintessential terminal application everyone on Windows MUST have. It’s crazy how good it is. Maybe because it’s like any of the terminal applications we have had on Linux for decades.

To install Windows Terminal, get it from the Windows Store. It will automatically pick up whatever console is accessible on the system, so it should include CMD, PowerShell and any and all WSL distributions you have running. (And Azure Cloud Shell, for the 5 people that use it.)

4. PowerToys

Docs

This isn’t required, but extremely helpful. I invite you to go read the documentation above and learn what they do.

To install PowerToys simply do choco install -y powertoys from an elevated (opened as Administrator) PowerShell.

See, in just 2 minutes you’ve already used Windows Terminal and Chocolatey for something extremely useful.

SSH, the newbie’s worst enemy

Now that you (hopefully) have Ubuntu installed in your WSL, you can open it in the beautiful Windows Terminal app that you (hopefully) just installed, and we can start tinkering like proper systems administrators — not the fake kind you find on Windows.

5.1 Generate your SSH Key

On Ubuntu, run the following command:

  1. ssh-keygen -t rsa -b 4096 -C "your_email@domain.com"
    
  2. When it asks for a passphrase, set one and save it in your password manager. (If you don’t have a password manager, reevaluate your hobbies)

It will ask you where you want to save it; it defaults to /home/your_user/.ssh/id_rsa. I recommend not calling it id_rsa but giving it a different, recognizable name.

Once that’s done, you should have your key pair: a private key, which you must absolutely never give to anyone or put anywhere besides your own systems, and a public key, which will be stored on any and all servers you want to access.

Before doing anything else, make sure that /home/your_username/.ssh has 700 permissions and the key you just created has 600 permissions. To do that, run these two commands:

chmod 700 ~/.ssh/ && chmod 600 ~/.ssh/*

By default, the SSH daemon will want you to be as private as possible with these files. 700 permissions mean that only you can access and interact with the .ssh folder, and 600 permissions mean that only you can interact with the keys themselves.
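You can check that the permissions stuck with stat. This sketch uses a throwaway temporary directory as a stand-in for ~/.ssh, so you can try it anywhere without touching your real keys:

```shell
# Create a throwaway directory to play the role of ~/.ssh
d=$(mktemp -d)
chmod 700 "$d"
touch "$d/id_demo"
chmod 600 "$d/id_demo"

# %a prints the octal permission bits (GNU stat)
stat -c '%a' "$d"           # prints 700
stat -c '%a' "$d/id_demo"   # prints 600

rm -rf "$d"
```

Run the same two stat commands against your real ~/.ssh and key file to confirm they read 700 and 600.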

5.2 Using your new shiny keys

Now that you have your keypair, we can learn how to use them properly.

First and foremost, we need keychain, so install it:

  1. sudo apt update
  2. sudo apt install keychain

Keychain keeps whatever key you add to it for the entirety of the session, meaning you type your passphrase once when you add your key and never again until you log off or reboot your system.

To make sure it works, do the following:

  1. sudo apt install vim
  2. vim ~/.bashrc
  3. SHIFT+G on your keyboard, to go to the end of the file
  4. i on your keyboard to enter edit mode
  5. add eval $(keychain --eval) at the end of the file
  6. ESC on your keyboard, to exit edit mode
  7. : on your keyboard, to enter command mode
  8. wq after : to enter the write w and the quit q command, to save and exit the file

Or, you can do it with nano ~/.bashrc like a noob. What this process does is make sure keychain runs every time you open WSL, so you’re ready to SSH into your servers.

Now, you have two choices:

  1. Add the key manually the first time you need it
  2. Make sure the key is added automatically at the first WSL start

For the first method, do the following:

  • ssh-add ~/.ssh/your-private-key

Do this for every key you have; or, for multiple keys at once:

  • eval $(keychain --eval --agents ssh your_private_key_1 your_private_key_2)

For the second method, add this last command to the end of .bashrc like we did just a second ago.
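Putting both pieces together, the tail of your ~/.bashrc would look something like this sketch (the key names are placeholders for whatever you called yours):

```
# ~/.bashrc — at the end of the file
# Start keychain once per session and load the listed keys,
# prompting for their passphrases only on the first login
eval $(keychain --eval --agents ssh your_private_key_1 your_private_key_2)
```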

To top it all off, make sure you have a config file inside ~/.ssh with the following contents:

AddKeysToAgent yes
IdentityFile /home/your_user/.ssh/your_private_key_1
IdentityFile /home/your_user/.ssh/your_private_key_2
IdentityFile /home/your_user/.ssh/your_private_key_3

5.3 Configuring and connecting to your server

Now that you have your keys configured, we can use them to configure a server to accept SSH connections from your keys exclusively.

I will assume you already have a means to access a server. Usually, when you rent a server, they will ask you to paste in your public key so that when the server is created, it already has SSH configured for you to get in. Otherwise, just get in with the password they give you for the root user.

  1. Create the user adduser --disabled-password username
  2. Access the user’s shell su - username

What determines who can get into your server is your user’s home, the .ssh folder and the authorized_keys file on the server itself. You could use the root user’s home for this and connect directly as root every time, but I absolutely do not recommend it. There are far better methods to become root after you’ve securely connected as your regular user.
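If your machine has ssh-copy-id (it ships with OpenSSH on most distros), the manual copy-the-public-key dance below can be done in one command. The hostname and key name here are placeholders:

```shell
# Append your public key to ~/.ssh/authorized_keys on the server,
# authenticating with the password one last time
ssh-copy-id -i ~/.ssh/your_private_key_1.pub username@your.server.ip
```

It also creates ~/.ssh on the remote side with sane permissions if it doesn’t exist. Still, it’s worth doing it by hand at least once, as follows, so you understand what the tool is doing.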

Check if you have authorized_keys inside ~/.ssh by sending the list command:

ls -la ~/.ssh

If you don’t, then create and configure it:

  1. vim ~/.ssh/authorized_keys
  2. : to go into command mode
  3. set paste to enable paste mode (and press Enter)
  4. i to enter edit mode
  5. paste in your public key
  6. :wq to save and exit

Make sure that authorized_keys has the right permissions:

chmod 600 ~/.ssh/authorized_keys

Now exit the system and try to SSH back into it, with your own SSH key and your new user, instead of root:password. If everything’s successful, you can move onto the next part. Otherwise go on Google and figure it out.

5.4 Securing the server

Now that you can use your SSH key to access the system, we can think about securing it:

  1. Install ufw if it isn’t installed: sudo apt install ufw
  2. Allow port 22 with sudo ufw allow 22/tcp
  3. Enable ufw with sudo ufw enable

Now you have a working firewall where you can dictate which ports are accessible from the outside. Obviously, if you have another firewall in front of it, you might not even need ufw, or you’ll have to make sure you allow or restrict ports on both of them.

  1. sudo visudo this will open a nano editor for you to change how sudo operates on your system
  2. add your_user ALL=(ALL) NOPASSWD:ALL at the end of the file (order matters in sudoers: later lines win)

What this does is allow your user to run sudo commands without a password. This is extremely important because we will remove the root user’s password.

A security concern: obviously this also means that if your user is compromised along with your ssh-key (and many stars align in the universe), the attacker has access to the entire system. If you are absolutely scared of this, don’t remove the password from anything.

  1. Remove root’s password: sudo passwd -d root (or lock the account instead with sudo passwd -l root)

Remove SSH password authentication entirely:

  1. sudo vim /etc/ssh/sshd_config
  2. / on your keyboard to enter search mode
  3. write PasswordAuthentication to find the correct line
  4. uncomment this line, by going in edit mode i and removing the # comment
  5. leave it set on no to disable password authentication
  6. sudo systemctl restart ssh to restart the SSH daemon
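After this section, the relevant lines of /etc/ssh/sshd_config should read something like the following sketch (PermitRootLogin prohibit-password is an optional extra I’d add, not something configured above):

```
# /etc/ssh/sshd_config — relevant lines only
PasswordAuthentication no
PubkeyAuthentication yes
# optional but sensible once root has no password:
PermitRootLogin prohibit-password
```

Keep your existing SSH session open while you restart the daemon; if you locked yourself out, you can still fix the config from that live session.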

5.5 Conclusion

You should now have:

  1. your own keypair, public and private
  2. a server user with no password and properly configured key access via authorized_keys
  3. a root user with no password
  4. the ability to use sudo with no password for your user
  5. an SSH daemon that denies any password authentication
  6. port 22 allowed by the enabled ufw firewall

Registering a domain and why you’re not going to actually use it for this document

As for registering your domain: in this document I will not actually explain how to self-host things from home for the public. I feel that’s out of scope, the variables involved in making sure everyone can port forward the right ports to the right devices are too many, and frankly I could not care less. At this point there’s also no reason to jot down what you would do on a cloud server, which is already publicly accessible, because most server providers allow all ports.

So why am I making you buy a domain? The reason is quite simple: technically, you could use ANY domain for your internal network, but it’s best when you actually own the domain you use, so that you can also avoid .local hostnames, which look quite boring. Trust the word of a .wtf, .life and, most importantly, a .moe owner.

I know some people will want to “sign certificates for HTTPS” even inside their own local network and never expose anything. Why? Anyway… that’s on you. The best I can do is leave some resources at the end of this chapter so you can go learn about that on your own. NGINX Proxy Manager and Cloudflare will make that process extremely easy anyway.

But Wise? What’s the point of making this document if you’re not even gonna explain how to expose things to the internet that we might need? Like, Nextcloud for example.

Well, young padawan, first of all, you should install Seafile, not Nextcloud. Second of all, as I explained earlier, this is out of scope. I am writing about the ABSOLUTE BASIC essentials of how to navigate this world, and this chapter is about doing the best thing possible with zero knowledge, or very little. You cannot possibly expect me to write — in this document — how to (in no particular order):

  1. Install a LEMP stack - which includes configuring PHP, PHP-FPM, MySQL and NGINX

  2. Create a MySQL database

  3. Create a Nextcloud virtualhost (an absolute pain in the ass)

  4. Explain what NAT is, because why would I talk about port forwarding otherwise? I don’t even know what NAT is myself (don’t let my boss see this)

  5. Actually port-forward your 443 and 80 ports to whatever device you’re using, on whatever modem/router/firewall you are using

  6. Configure a public DNS view on Cloudflare (with the possibility of having to consider people with dynamic IPs and what that entails)

  7. Configure Certbot to work nicely with NGINX safely (and not with NGINX plugins, which are fine if you know how it all works without them)

  8. Actually deploy Nextcloud

  9. Troubleshoot all of the PHP dependency errors

  10. Troubleshoot all of the DB issues

  11. AND MANY MORE THINGS…

As you can see, it gets out of hand very quickly. And SURE! FINE! you can do all of this with Docker and a Linuxserver image. Go on then, do it. Using Docker as a newbie isn’t LEARNING; it’s like pedaling a bicycle with training wheels. But with no brakes, and actually… your handlebar is a Snickers bar and you have diabetes. And there will be no insulin left for when you slowly fall asleep. Then suddenly you wake up sweaty, and your Docker containers have all shut down, you have no backups because you didn’t learn anything, and your filesystem is full because YOU made it create 1600 images, 54200 volumes, 1 billion lines of log and 700 networks — and to top it off: you aren’t even using Docker Compose! So you have no fuckin’ clue how you actually ran those images in the first place.

As a side-note, I love Docker, I use it all the time for all of those things that are hell to deploy. Namely: anything in Java, PHP or any other thing some LFD (lazy-fucking-developer) on GitHub didn’t bother to BUILD a release for, and just SLAPPED a Docker image and a weird fucking “““ₜᵤₜₒᵣᵢₐₗ””” in the readme. Or worse… a DOCKERFILE.

Registering a domain is quite simple. All you gotta do is pay. So, pick a domain name that sounds cool to you, any TLD (Top-level domain: the .com part) and buy it. Cool, you now have a domain, and for the purpose of this document, this is done and we will not touch it. But you have it and it’s yours and you will definitely use it later.

A final word

While writing this document, I decided that I should stop introducing new topics. I cut (believe it or not) 50% of the content I had actually written for it. It became way too much, and I have decided to write smaller documents for specific things instead. For this reason the document might look a bit all over the place, but I believe I have done a good job of giving a solid and safe starting point to would-be “self-hosters”.

You know, what I actually see most, even at work, is people simply failing the most basic of workflows. Even just connecting to a server, or making a decision on a protocol. I still see people favoring FTP over anything SSH- or HTTP-based, and that drives me nuts.

In that sense, I believe my document is exactly as scary and off-putting as I want it to be, because you should really, really be passionate about this hobby if you actually want to learn and do things the right way. Because believe me, there are an awful lot of bad ways to go about self-hosting.

Resources for the IT alchemist

  • A Compendium of Awesome-Lists -> This tracks many awesome-lists.

  • Awesome-Selfhosted -> Your bible. Keep this on track to find self-hostable open source software.

  • NGINX Admin guide -> Trust me and read this. Once you learn NGINX your life will be 100 times easier.

  • Keep tabs on the selfhosted Reddit -> Find help, give help and discover new stuff.

  • CrowdSec -> Install this fail2ban upgrade on anything you expose to the internet

  • Hetzner.com and Online.net -> European server hosting with competitive prices and good stability. Specifically Server auction - Hetzner Online GmbH to find absurdly good prices on good hardware

  • LinuxGSM -> An open source project of scripts that install easy-to-use gameservers

  • The Linuxserver Fleet -> The good guys of Docker, they make Docker images for popular software so that you don’t have to deal with Java

  • Any documentation you will not read because you want to skip steps

  • Namecheap -> To buy domains

  • Cloudflare -> The best way to manage your DNS (short of hosting it yourself)

  • Node-red -> If you want to be a fake programmer but you like automation and coding sucks

  • The Servarr Wiki -> If you want to be a pirate of this century and who the fuck uses USENET anymore.


  • Mailcow -> If you’re crazy enough to host your own email and EVERY other suite sucks. Especially Zimbra. Fuck Zimbra.

Want to support me?

Find all information right here


Credits

  • My mom