Mandatory acknowledgement of the fact that a new year has begun

The first article of the year, woo! I am a slave to cynicism. There’s gonna be no year wishes in this article. Let’s hope people behave this time, I’ll leave it at that. Over and out.

Introduction

If you’ve read my articles in the past (or my Reddit comments), then you know that I am vehemently against using Docker for everything, specifically when you don’t know how those things work outside of launching a Docker image. One of these things is NGINX. There are some GREAT NGINX images out in the wild, like the ones from the good people at Linuxserver.io. And yet, I still find it imperative that you LEARN how to set up a simple reverse proxy virtual host on your own. Running a reverse proxy in a Docker network just doesn’t make sense, at all. It’s like placing the entrance door to your house behind a solid wall and leaving a pickaxe by the front yard.

If you want to see me ramble about this thing like an idiot for 2 hours straight, you can watch the VOD of me preparing for this article on YouTube, or follow me on Twitch where I stream how I prepare for these articles live, with practical examples that you can follow.

Requirements

  1. You must know how to open/forward ports to whatever you are installing NGINX on
  2. You must have something you can install NGINX on
  3. You must have something to create a reverse proxy virtualhost for
  4. Something to drink while you read
  5. A good album to listen to

DISCLAIMER: I am using Ubuntu 20.04 LTS, so most of what I do applies to Debian-based distros. I am also assuming that you do not have NGINX already installed. If you do, you need to remove it first.

Installing NGINX, the correct way

Sure, you could simply do apt install nginx right off the bat, but have you ever asked yourself which version it installs? I can assure you, it’s probably not 1.21.5. NGINX Download center.

So, what we’re gonna do is a bit different, we’re gonna install it from the NGINX repository itself. Meaning that we will get the latest mainline version and not whatever the people at Ubuntu have packaged for apt.

First, make sure you have root access and while you’re at it, just launch sudo apt update && sudo apt upgrade -y so you’re all cosy and upgraded. After that’s done, access the root shell: sudo -i.

Install the requirements:

sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring

  • curl is a tool to transfer data over various transfer protocols (http, ftp, imap, smb…)
  • gnupg2 implements the OpenPGP standard, used to encrypt, manage and create digital signatures
  • ca-certificates contains the Certificate Authority certificates shipped with Mozilla's browser
  • lsb-release provides the lsb_release command, which reports Linux Standard Base and distribution information
  • ubuntu-keyring is the centralized and secure location for archive keys inside Ubuntu

Download the NGINX GPG key:

curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
  • Try to open https://nginx.org/keys/nginx_signing.key and view the contents of the file
  • gpg --dearmor is a tool that converts input or output into/from an OpenPGP ASCII armour. The NGINX signing key is OpenPGP armoured, therefore we de-armour it. Further reading. It’s all a fancy way to avoid this mess
  • tee reads from stdin and writes to stdout and files. Don’t know what those are? Thankfully I wrote about it
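If the pipeline above still feels opaque, here is the same shape with harmless stand-ins (the file name and contents below are made up for the demo, and it writes under /tmp instead of /usr/share/keyrings so no root is needed): data flows through the pipe while tee writes a copy to a file.

```shell
# Stand-in for: curl … | gpg --dearmor | sudo tee <keyring> >/dev/null
# printf plays the role of curl and we skip --dearmor; the point is that
# tee writes the piped bytes to a file, while >/dev/null discards the
# copy tee would otherwise echo back to the terminal.
printf 'fake-key-bytes' | tee /tmp/demo-keyring.gpg >/dev/null
cat /tmp/demo-keyring.gpg    # the copy tee wrote survives on disk
```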

Verify that the GPG key is valid:

gpg --dry-run --quiet --import --import-options import-show /usr/share/keyrings/nginx-archive-keyring.gpg

  • --dry-run doesn’t make changes to the key
  • --quiet mutes some of the output
  • --import adds the key to the keyring
  • --import-options adds options to importing keys
  • import-show shows the contents of the key when paired with --dry-run. show-only combines the two arguments

Example of a successful output:

pub   rsa2048 2011-08-19 [SC] [expires: 2024-06-14]
      573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
uid                      nginx signing key <signing-key@nginx.com>

Now decide whether you want the mainline repository or the stable repository; the difference is explained here. In short, mainline gets updates, bugfixes and new features faster, while stable generally stays one minor version behind and only receives major bugfixes. The name stable is a bit misleading: it's not actually more stable than mainline, which is generally regarded as more reliable and is the branch NGINX itself recommends.

NGINX Stable

echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list

NGINX Mainline

echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/mainline/ubuntu `lsb_release -cs` nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list
  • echo echoes what is after it
  • lsb_release -cs prints distribution-specific information: -c displays the code name of the installed distribution, -s does so in a short format
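To see exactly what lands in nginx.list, you can expand the line by hand. On Ubuntu 20.04 `lsb_release -cs` prints focal, so (simulating the command with a variable, in case you're following along on a different box):

```shell
# Simulate `lsb_release -cs` on Ubuntu 20.04, where it returns "focal",
# and print the exact line that ends up in /etc/apt/sources.list.d/nginx.list
codename=focal
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/mainline/ubuntu $codename nginx"
```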

Set up APT Pinning so that you favour NGINX’s repository when installing/upgrading NGINX.

echo -e "Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n" \
| sudo tee /etc/apt/preferences.d/99nginx
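To double-check the pin file was written the way you intended, grep it back. The sketch below writes to /tmp so you can try it unprivileged (swap in /etc/apt/preferences.d/99nginx on the real system); it also uses printf instead of echo -e, which produces the same lines but is more portable across shells.

```shell
# Write the same pin stanza to a scratch location and read it back
printf 'Package: *\nPin: origin nginx.org\nPin: release o=nginx\nPin-Priority: 900\n' > /tmp/99nginx
grep 'Pin-Priority' /tmp/99nginx    # → Pin-Priority: 900
```

After the real install, apt-cache policy nginx should show packages from the nginx.org origin pinned at priority 900.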

Finally:

  • sudo apt update
  • sudo apt install nginx
  • nginx -v should output nginx version: nginx/1.21.5

fi

Install Certbot

Certbot is installed through snap for most systems. I will not fight over this, I do not care. When I’m at work I will weigh whether or not I want to install snap on my system, at home, I do not care. Pick your System and your Software and follow the directions (I picked NGINX and Ubuntu 20)

  • snapd is already installed on Ubuntu 20.04LTS
  • sudo snap install core; sudo snap refresh core updates snapd
  • sudo snap install --classic certbot installs Certbot via snap
  • sudo ln -s /snap/bin/certbot /usr/bin/certbot symbolically links the binary into your PATH

fi

Brief introduction to NGINX

NGINX has many uses, today we are setting up a virtual host to function as a reverse proxy, but technically NGINX can be a load balancer, a web server, a content cache and even a mail proxy.

It works under a precise structure of directives and contexts. The origin of its configuration (found in /etc/nginx/nginx.conf) is a context called main, and inside it lives the http{} directive; cascading under it, the server{} directive, and inside this last one the location{} directive.

So essentially, everything is included into each thing following this order:

main {
    events{}
    http{
        server{
            location{

            }
        }
    }
}

There are many more directives in NGINX, and each one of them has specific places they go to. Some can only go inside the server{} directive (or server block), some only inside the location{} directive.
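As a rough sketch (not a production config; the domain and directives below are just placeholders to show placement), a minimal nginx.conf respecting that structure looks like this. Move return outside a server or location block and nginx -t will refuse the file:

```nginx
worker_processes auto;            # main context

events {
    worker_connections 1024;      # only valid inside events{}
}

http {
    server {
        listen 80;
        server_name example.com;  # placeholder domain

        location / {
            return 200 "hello\n"; # valid in server/location, not in http
        }
    }
}
```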

It will all become clear once you configure your first virtual host, so for now… some further reading:

Configuring the snippets

The snippets folder inside /etc/nginx is something I create to add bits of configurations that are almost always used in my virtual hosts. It’s just a convenient way to share them across multiple configurations without having to rewrite all of them all the time. So, follow along:

I assume you are working on this from root, if not, be my guest and use sudo 300 times

  • mkdir /etc/nginx/snippets to create the snippets folder

My snippets configurations are divided into three files:

  1. proxy_params.conf contains all the proxy header directives
  2. ssl_params.conf contains all the add_header directives
  3. unwanted_methods.conf locks out some tracking and tracing methods for http

Create the first snippet:

vim /etc/nginx/snippets/proxy_params.conf

add the following contents (if you're not used to vim, and you should be, use nano instead):

proxy_set_header      Host $host;
proxy_set_header      X-Real-IP                 $remote_addr;
proxy_set_header      X-Forwarded-Port          $server_port;
proxy_set_header      X-Forwarded-For           $proxy_add_x_forwarded_for;
proxy_set_header      X-Forwarded-Proto         $scheme;
proxy_set_header      X-Forwarded-Host          $host;
proxy_set_header      X-Forwarded-Server        $host;
proxy_set_header      Proxy "";
proxy_http_version      1.1;

proxy_set_header allows redefining the request header fields passed to the proxied server. The value can contain text, variables, and combinations of them.

  • Host specifies the host and port number of the server to which the request is being sent
  • X-Real-IP identifies the client’s IP address
  • X-Forwarded-Port identifies the listener port number that the client used to connect
  • X-Forwarded-For identifies the originating IP address of a client that is connecting
  • X-Forwarded-Proto identifies the originating protocol a client is using to connect (HTTP or HTTPS)
  • X-Forwarded-Host identifies the original host requested by the client
  • X-Forwarded-Server identifies the hostname of the proxy host
  • proxy_http_version sets the HTTP protocol version for proxying

Create the second snippet:

vim /etc/nginx/snippets/ssl_params.conf

add the following contents:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
add_header X-XSS-Protection 1;
add_header Referrer-Policy same-origin;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff" always;
add_header Permissions-Policy ";midi=();notifications=();push=();sync-xhr=();microphone=();camera=();magnetometer=();gyroscope=();speaker=(self);vibrate=();fullscreen=(self);payment=();";
proxy_cookie_path / "/; HTTPOnly; Secure";
ssl_stapling on;
ssl_stapling_verify on;
add_header Content-Security-Policy $DO_IT_YOURSELF ;

add_header adds the specified field to a response header provided that the response code equals 200, 201 (1.3.10), 204, 206, 301, 302, 303, 304, 307 (1.1.16, 1.0.13), or 308 (1.13.0).

  • Strict-Transport-Security (or HSTS) informs web browsers that this website should ONLY be accessed via HTTPS, and that all HTTP attempts are to be redirected to HTTPS. Once browsers have cached it, serving the site over plain HTTP would break it.
  • X-XSS-Protection is a header that informs browsers to stop loading a website when they detect Cross-site scripting
  • Referrer-Policy controls how much Referrer information browsers should include with requests
  • X-Frame-Options defines whether or not a website may be rendered inside a frame, iframe, embed or object. To avoid Click-jacking
  • Permissions-Policy defines which features and information a website is allowed to use from a web browser and its underlying client host. Permissions Policy Explainer
  • proxy_cookie_path rewrites the path attribute of the Set-Cookie response header; here it also appends the HTTPOnly and Secure flags Read more
  • ssl_stapling and ssl_stapling_verify enable and verify OCSP stapling, which is used to check the revocation status of certificates. Read OCSP stapling
  • Content-Security-Policy defines how any kind of content is loaded on a website from a browser. Anything on a web page. For this reason, this is a fuckin’ pain in the ass. You NEED to read more on CSP by Mozilla here, and other than that, run your website through the many hardening checkers online that I will have listed in the Useful resources part of this document.

Create the third snippet:

vim /etc/nginx/snippets/unwanted_methods.conf

add the following contents:

if ($request_method ~ ^(TRACK|TRACE)$) {
    return 405;
}

Disables the TRACK and TRACE http methods. Read more here.
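The anchors in that regex matter: ^ and $ force it to match the whole method string exactly, so TRACK and TRACE are blocked but a method merely containing them would not be. You can convince yourself with grep -E, which understands the same extended-regex syntax:

```shell
# Same pattern the snippet uses, checked against a few method names;
# only exact matches of TRACK or TRACE get the 405 treatment
for m in GET POST TRACK TRACE TRACKING; do
    if printf '%s' "$m" | grep -Eq '^(TRACK|TRACE)$'; then
        echo "$m -> 405"
    else
        echo "$m -> allowed"
    fi
done
```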

fi

Create your simple virtual host

You might have noticed that the default installation for NGINX from their repository does not create the sites-available and sites-enabled folders that would be otherwise created if you installed NGINX from Ubuntu's repositories. I have no idea if this is a version thing or a packaging change. What I do know is that CentOS also does the same, and I hate it.

So, (as root) create both folders under /etc/nginx/

  1. mkdir /etc/nginx/sites-available/
  2. mkdir /etc/nginx/sites-enabled/

NGINX has many different checks in place so that you don't make mistakes, but the sites-available/sites-enabled convention is one of the most useful ones, and it should still be used.

Now edit /etc/nginx/nginx.conf and add include /etc/nginx/sites-enabled/*.conf; just under include /etc/nginx/conf.d/*.conf; near the end of the file. This edit makes NGINX load any .conf file under sites-enabled, which is where we will link our virtual hosts after they are created (and after they have a valid SSL certificate). Since that include sits inside the http block (as I explained earlier), everything it pulls in is parsed as part of the http context.

Create the virtual host inside /etc/nginx/sites-available/

MAKE SURE YOU CHANGE yourdomain.com TO YOUR ACTUAL DOMAIN, EVERYWHERE: vim /etc/nginx/sites-available/yourdomain.com.conf (don't forget the .conf or NGINX won't pick the file up)

and add the following content

server {
        listen 80;
        server_name yourdomain.com;

        include snippets/unwanted_methods.conf;

        return 301 https://$host$request_uri;
}

server {
        listen 443 ssl http2;
        server_name yourdomain.com;

        error_log /var/log/nginx/yourdomain.com-error.log;
        access_log /var/log/nginx/yourdomain.com-access.log;

        include snippets/unwanted_methods.conf;
        include snippets/ssl_params.conf;
        include snippets/proxy_params.conf;
        include /etc/letsencrypt/options-ssl-nginx.conf;

        ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        server_tokens off;

        http2_push_preload on;

        location / {
                proxy_pass              http://127.0.0.1:YOUR_DOCKER_CONTAINER_PORT;
        }
}

Let’s break this thing down:

  • server defines the server blocks (or virtual hosts) for this configuration. There are two of them: one handles HTTP and the other HTTPS. You can tell because one listens on port 80 and the other on port 443, the default ports for HTTP and HTTPS respectively.

In the first block:

  • listen defines which port this server block will listen on; NGINX opens a listening socket on that port and waits for connections. Needless to say, if this port isn't open and forwarded to this NGINX host, nothing will work.
  • server_name defines the domain or fully-qualified domain name (hostname) that a request must carry to match this server block. If this host contains server_name mars.com, a request arriving at this NGINX for venus.com will not match this server block, but it will match a venus.com block if one exists.
  • include is self-explanatory: it includes the contents of the given file as if they were written literally inside this .conf file.
  • return 301 means that if a request comes in over HTTP to this server block, NGINX returns HTTP 301 and redirects it to HTTPS, using $host (which defaults to the host of the request, so server_name) and $request_uri (whatever the client requested).

In the second block, we have additional directives:

  • error_log and access_log define where NGINX writes logs for this specific block. Without these, everything would be written to the default NGINX logs (/var/log/nginx/access.log and /var/log/nginx/error.log)
  • more include(s) for all of the snippets we created earlier. These stay mostly the same across all configurations.
  • there is one additional include for Letsencrypt's security parameters; this file is created when you install and run Certbot
  • ssl_certificate and ssl_certificate_key define where this server block should find the certificate and key needed to serve HTTPS. /etc/letsencrypt/live/yourdomain.com/fullchain.pem and privkey.pem respectively are the default paths Letsencrypt writes your certificate material to. The only thing that changes is, obviously, yourdomain.com
  • ssl_dhparam points to the parameters OpenSSL needs to perform the Diffie-Hellman key exchange; these are generated by Letsencrypt as well.
  • server_tokens off disables emitting public information about the NGINX version in error messages and in the Server response header field.
  • http2_push_preload on is a way for the server to tell the client what it needs to ask for when loading the web page, without a prior request. It's faster. If the client knows it has to ask for a .css stylesheet, the server also sends whatever that stylesheet might require to work properly, like a font.

The second server block also contains a location block; this defines what happens when a request for the specified URI (or path) comes in. In this example we have /, which is the root of the domain: https://yourdomain.com/.

  • proxy_pass is the proxying directive: it tells NGINX who to pass requests to, in this case our Docker container listening on a port. For example, Sonarr listens on 8989 by default for its Web-UI, so in that case we would use 8989 as the port. The 127.0.0.1 part assumes Docker is running on the same machine as NGINX; otherwise you would put in the local IP of the Docker host, making sure NGINX is capable of reaching that IP. And http:// vs https:// depends on what the container itself expects requests to come in on; some applications require you to proxy_pass to https:// because they only listen on the secure protocol.
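For the "Docker on another machine" case, a hedged sketch (the IP addresses, ports and path below are made up; substitute your own):

```nginx
location / {
    # Docker host living elsewhere on the LAN (hypothetical address);
    # NGINX must be able to route to this IP
    proxy_pass http://192.168.1.50:8989;
}

location /secure-app/ {
    # Hypothetical app that only listens on its own TLS port, so the
    # proxy_pass target must be https://
    proxy_pass https://192.168.1.50:8443;
}
```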

fi

Request your Letsencrypt certificate with Certbot

If you’re not using LE (Letsencrypt) in the year of your Lord Cthulhu 2022, you’re insane. So please follow along:

Keep in mind that your port 80 needs to remain accessible from the outside world, because LE connects to your port 80 for issuance and for every renewal (obviously, how could it connect to 443 if you don't have a certificate yet?)

  1. Make sure that the domain you're trying to request a certificate for has an A record (or a CNAME record pointing to an A record) in your DNS, resolving to the public IP of the machine you're hosting NGINX on.
  2. Make sure that port 80 and port 443 are open in whatever firewall you have, and that they are forwarded internally to the machine you’re hosting NGINX on

DISCLAIMER: We will be using the NGINX plugin for Certbot in this guide. I will not explain how to set up webroot because, frankly, despite the internal war we have going on at my workplace, I think it's insanely useless to set up and just more work that is not needed. Thank you very much.

Verify that you have everything set up correctly to request a certificate:

certbot certonly --nginx --dry-run -d yourdomain.com

  • certbot invokes the Certbot binary
  • certonly requests only the certificate and doesn’t install it anywhere
  • --dry-run tries a run to request the certificate against Letsencrypt’s testing CA, and not the production one. If you fail 5 requests consecutively to the production CA Letsencrypt will block you for 12/24 hours. So make sure not to miss this argument
  • -d defines the domain the request is for

If everything is successful, it will say that the dry run was successful. If it doesn't, it will clearly state what your problem is. The two most common ones are:

  1. LE cannot reach port 80 on the NGINX host
  2. Your domain records aren't set up correctly, so the domain you are requesting a certificate for doesn't resolve to the NGINX host

Try again until you are successful, when you are… continue.

So now you can simply remove the --dry-run argument and let LE do its thing: it will write your shiny new certificates to the paths we defined earlier for the certificate and key inside the server block.

fi

Enable your virtual host

Now that we have done everything, the only thing that remains to do is to check if we fucked up somewhere during the procedures:

  • Symbolically link the virtual host inside the sites-enabled folder we created with ln -s /etc/nginx/sites-available/yourdomain.com.conf /etc/nginx/sites-enabled/

Now that the .conf file is inside the sites-enabled folder, NGINX is primed to load it on its next reload. We don't want to blindly reload NGINX without knowing whether it's gonna work, so first launch nginx -t to test the entire configuration; if the test is successful, go ahead and run nginx -s reload to RELOAD (not RESTART) the configuration. NEVER restart NGINX unless you absolutely must.

Reloading the configuration makes the NGINX workers finish handling whatever requests they currently have open and only then load the new configuration. Restarting NGINX instead truncates whatever connections it's handling at that moment and starts everything from scratch.
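That habit condenses to one line: on the real host you'd run nginx -t && nginx -s reload, and since && only runs the right-hand command when the left-hand one exits 0, a broken config can never trigger the reload. The sketch below demonstrates the short-circuit with true/false standing in for a passing/failing nginx -t, so you can run it anywhere:

```shell
# Passing config test: the reload side runs
true  && echo "would reload"
# Failing config test: && short-circuits, || catches the failure
false && echo "would reload" || echo "config broken, not reloading"
```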

fi

Final words

That concludes this document. Hopefully, you’re now able to configure any virtual host for whatever container you’re running on Docker. As is with anything in this world, NGINX is much more complicated than what I wrote in this document. It can do MANY different things, and not everything you need to proxy is as simple as I’ve written about. You WILL encounter some issues, but hopefully, you’re at least a little bit better prepared for what’s coming your way.

Useful resources

My score used to be A+ but it’s now lowered to B+ because I am using Umami so I run scripts on the website.

Want to support me?

Find all information right here

You can also support me here:

Credits

  • My mom
  • Homelabber

Check out The Hall of Fame

You can download the markdown version of this guide from here