Toolkit for deploying n8n to a cloud provider on a small, secure VM. Useful for getting started with self-hosting n8n.
Stack: Terraform, Ansible, Docker, Caddy, Litestream, n8n
Cloud providers supported:

Provider | Instance Type | Arch. | vCPU | RAM | Storage |
---|---|---|---|---|---|
Hetzner | cax11 | ARM64 | 2 | 4GB | 40GB SSD |
Digital Ocean | s-1vcpu-1gb | AMD64 | 1 | 1GB | 25GB SSD |
Linode | g6-nanode-1 | AMD64 | 1 | 1GB | 25GB SSD |
AWS | t4g.micro | ARM64 | 2 | 1GB | 30GB SSD |
GCP | e2-micro | AMD64 | 2 | 1GB | 30GB SSD |
Azure | b1s | AMD64 | 1 | 1GB | 30GB SSD |
A little project for learning infrastructure as code.
- Install dependencies:

  ```sh
  brew install terraform
  brew install ansible
  ```

- Create SSH key pair:

  ```sh
  ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_n8n_deploy
  ```
- Obtain credentials from your cloud provider. Provider-specific guides:
  - Hetzner: Obtain a project API token with `Read & Write` permissions.
  - Digital Ocean: Obtain a personal access token with `Full Access` permissions.
  - Linode: Obtain an access token with `Read/Write` permissions.
  - AWS: Obtain an access key and secret key for an IAM user with the `AmazonEC2FullAccess` policy.
  - GCP: Obtain a JSON credentials file for a service account with the `Compute Admin` role, in a project with the Compute Engine API enabled.
  - Azure: Obtain a subscription ID, tenant ID, client ID, and client secret for a Microsoft Entra app with the `Contributor` role assigned under a subscription.
> [!NOTE]
> These suggested permissions are kept broad to simplify initial setup. Consider configuring more limited permissions based on your security requirements.
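Most of the Terraform providers involved can also read credentials from standard environment variables, which avoids pasting tokens interactively. A hedged sketch — whether `provision.sh` picks these up is an assumption; it may prompt for the token instead:

```shell
# Standard credential env vars recognized by the respective Terraform providers.
# NOTE: whether provision.sh reads these is an assumption; it may prompt instead.
export HCLOUD_TOKEN="your-hetzner-token"            # Hetzner
export DIGITALOCEAN_TOKEN="your-do-token"           # Digital Ocean
export LINODE_TOKEN="your-linode-token"             # Linode
export AWS_ACCESS_KEY_ID="your-aws-access-key"      # AWS
export AWS_SECRET_ACCESS_KEY="your-aws-secret-key"  # AWS
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/gcp-credentials.json"  # GCP
```

For Azure, the azurerm provider reads `ARM_SUBSCRIPTION_ID`, `ARM_TENANT_ID`, `ARM_CLIENT_ID`, and `ARM_CLIENT_SECRET` in the same way.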
- Be able to manage DNS records for a domain.

- Clone this repository:

  ```sh
  git clone https://github.com/ivov/n8n-deploy-starter.git
  cd n8n-deploy-starter
  ```
- Provision VM:

  ```sh
  chmod +x provision.sh
  ./provision.sh
  # -> VM provisioned with IP address: 203.0.113.42
  ```
- In your DNS provider, create an A record pointing your chosen hostname at the VM's IP address:

  ```
  Host: {subdomain}.{domain}.{tld} # e.g. n8n.company.com
  Type: A
  Value: {ip_address} # e.g. 203.0.113.42
  ```

  Wait for propagation:

  ```sh
  dig {subdomain}.{domain}.{tld} +short
  # -> 203.0.113.42
  ```
- Configure VM and launch n8n:

  ```sh
  cd ansible
  ansible-playbook 01-user-setup.yml
  ansible-playbook 02-system-setup.yml --ask-become-pass
  ansible-playbook 03-n8n-setup.yml --ask-become-pass
  ```
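The DNS propagation check above can be wrapped in a small polling loop, so you only proceed once the record actually resolves to the VM's address. A sketch — hostname and IP below are placeholders:

```shell
# Poll DNS until the A record for $1 resolves to $2.
# Optional: $3 = max attempts (default 30), $4 = seconds between attempts (default 10).
wait_for_dns() {
  host="$1"
  expected="$2"
  tries="${3:-30}"
  interval="${4:-10}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    got="$(dig +short "$host" | head -n 1)"
    if [ "$got" = "$expected" ]; then
      echo "resolved"
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo "timed out"
  return 1
}

# Example with placeholder values:
# wait_for_dns n8n.company.com 203.0.113.42
```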
Sample run
```
./provision.sh

Select cloud provider:
1) Hetzner
2) Digital Ocean
3) Linode
Enter number (1-3): 1
Enter Hetzner Cloud API token:

Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Using previously-installed hetznercloud/hcloud v1.45.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # hcloud_firewall.main will be created
  + resource "hcloud_firewall" "main" {
      + id     = (known after apply)
      + labels = (known after apply)
      + name   = "n8n-deploy-fw"

      + rule {
          + destination_ips = []
          + direction       = "in"
          + port            = "22"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
            ]
            # (1 unchanged attribute hidden)
        }
      + rule {
          + destination_ips = []
          + direction       = "in"
          + port            = "443"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
            ]
            # (1 unchanged attribute hidden)
        }
      + rule {
          + destination_ips = []
          + direction       = "in"
          + port            = "80"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
            ]
            # (1 unchanged attribute hidden)
        }
    }

  # hcloud_server.main will be created
  + resource "hcloud_server" "main" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + location                   = "nbg1"
      + name                       = "n8n-deploy"
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax11"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
    }

  # hcloud_ssh_key.main will be created
  + resource "hcloud_ssh_key" "main" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "n8n-deploy-key"
      + public_key  = (sensitive value)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + server_ip = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

hcloud_ssh_key.main: Creating...
hcloud_firewall.main: Creating...
hcloud_ssh_key.main: Creation complete after 0s [id=27379004]
hcloud_firewall.main: Creation complete after 1s [id=1905321]
hcloud_server.main: Creating...
hcloud_server.main: Still creating... [10s elapsed]
hcloud_server.main: Creation complete after 14s [id=60019087]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

server_ip = "203.0.113.42"

VM provisioned with IP address: 203.0.113.42
```
```
cd ansible
ansible-playbook 01-user-setup.yml

Enter name for non-root user: john
Enter sudo password for non-root user:
confirm Enter sudo password for non-root user:

PLAY [User setup] ***

ok: [203.0.113.42]

TASK [Capture non_root_user as ansible_user]
ok: [203.0.113.42 -> localhost]

TASK [Create non-root user with sudo privileges]
changed: [203.0.113.42]

TASK [Ensure ~/.ssh dir exists]
changed: [203.0.113.42]

TASK [Copy SSH public key from root to non-root user]
changed: [203.0.113.42]

TASK [Disable root SSH login]
changed: [203.0.113.42]

TASK [Disable password authentication for SSH]
changed: [203.0.113.42]

TASK [Restart SSH service]
changed: [203.0.113.42]

PLAY RECAP
203.0.113.42 : ok=8 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

ansible-playbook 02-system-setup.yml --ask-become-pass
BECOME password:

Enter IANA timezone (e.g. America/New_York) [Europe/Berlin]:

PLAY [System setup]

ok: [203.0.113.42]

TASK [Update vars.yml with timezone]
ok: [203.0.113.42 -> localhost]

TASK [Set timezone]
changed: [203.0.113.42]

TASK [Update apt cache]
changed: [203.0.113.42]

TASK [Upgrade all packages]
changed: [203.0.113.42]

TASK [Install packages]
changed: [203.0.113.42]

TASK [Configure fail2ban]
changed: [203.0.113.42]

TASK [Enable and start fail2ban]
changed: [203.0.113.42]

TASK [Install ufw]
ok: [203.0.113.42]

TASK [Allow OpenSSH]
changed: [203.0.113.42]

TASK [Allow HTTP]
changed: [203.0.113.42]

TASK [Allow HTTPS]
changed: [203.0.113.42]

TASK [Enable ufw]
changed: [203.0.113.42]

TASK [Enable unattended upgrades]
ok: [203.0.113.42]

TASK [Configure unattended upgrades]
changed: [203.0.113.42]

TASK [Reboot system]
changed: [203.0.113.42]

PLAY RECAP
203.0.113.42 : ok=16 changed=12 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

ansible-playbook 03-n8n-setup.yml --ask-become-pass
BECOME password:

Enter URL for n8n (e.g. n8n.domain.com): lab.n8n.tech
Enter email for n8n user: [email protected]
Enter password for n8n user:
confirm Enter password for n8n user:

PLAY [Configure]

ok: [203.0.113.42]

TASK [Install Caddy prerequisites]
changed: [203.0.113.42]

TASK [Add Caddy GPG key for package verification]
changed: [203.0.113.42]

TASK [Add Caddy repository to apt sources]
changed: [203.0.113.42]

TASK [Install Caddy]
changed: [203.0.113.42]

TASK [Create Caddyfile with provided URL]
changed: [203.0.113.42]

TASK [Start Caddy]
ok: [203.0.113.42]

TASK [Reload Caddy configuration]
changed: [203.0.113.42]

TASK [Install Docker]
changed: [203.0.113.42]

TASK [Add user to docker group]
changed: [203.0.113.42]

TASK [Download docker-compose]
changed: [203.0.113.42]

TASK [Create docker-compose symlink]
changed: [203.0.113.42]

TASK [Create n8n directories]
changed: [203.0.113.42] => (item=/home/john/.n8n)
changed: [203.0.113.42] => (item=/home/john/n8n-files)

TASK [Create .env file]
changed: [203.0.113.42]

TASK [Create docker-compose.yml]
changed: [203.0.113.42]

TASK [Copy n8n-ready-hook.js]
changed: [203.0.113.42]

TASK [Launch n8n]
changed: [203.0.113.42]

TASK [Setup Complete]
ok: [203.0.113.42] => {
    "msg": "n8n is ready! Access it at https://lab.n8n.tech"
}

PLAY RECAP
203.0.113.42 : ok=18 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Optionally, to back up your n8n DB to AWS S3 every 10 seconds with 1-week retention:

- Follow this guide to obtain AWS S3 credentials.

> [!NOTE]
> S3 providers other than AWS are currently not supported.

- Start backup service:

  ```sh
  ansible-playbook 04-backup-setup.yml
  ```

- Eventually, to restore the latest available backup:

  ```sh
  ansible-playbook 05-backup-restore.yml --ask-become-pass
  ```
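Under the hood, the backup service relies on Litestream streaming n8n's SQLite database to S3. For reference, a Litestream configuration matching the 10-second sync and 1-week retention would look roughly like the sketch below. The database path, bucket name, and replica path are illustrative assumptions; the playbook generates the real configuration:

```yaml
# Hedged sketch of /etc/litestream.yml -- paths and bucket are examples only.
dbs:
  - path: /home/john/.n8n/database.sqlite   # n8n's default SQLite file
    replicas:
      - type: s3
        bucket: your-backup-bucket          # assumption: your S3 bucket
        path: n8n
        sync-interval: 10s                  # replicate every 10 seconds
        retention: 168h                     # keep 1 week of backups
```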