
HashiCorp Nomad on an Offshore VPS

Nomad is HashiCorp's workload orchestrator - lighter and simpler than Kubernetes, with support for Docker containers, raw binaries, Java applications, and even Windows services on a single platform. Hosting Nomad on an offshore VPS gives you a workload scheduler with much less operational overhead than k8s and full control over where jobs run. AnubizHost VPS plans give Nomad the root access it needs for container and process management.


Nomad vs Kubernetes for Small to Mid Teams

Kubernetes is the de facto orchestrator for large-scale containerized workloads, but it carries enormous operational complexity. A production-grade k8s cluster requires expertise in etcd, the API server, controllers, ingress, CNI, CSI, monitoring, and more. For teams running a handful of services on a few VPSes, k8s is overkill.

Nomad takes a different approach. A single Go binary runs in server mode (the control plane) or client mode (the worker). There is no separate etcd, no mandatory ingress controller, and no complex CNI plumbing. Workloads are defined in HCL files that look similar to Terraform: each job spec declares a driver (docker, exec, java, qemu, etc.), resource requirements, and constraints, and Nomad places the workload on a healthy client that satisfies those constraints.

For teams running 5 to 50 services across 2 to 10 VPSes, Nomad is dramatically easier to operate than k8s. You still get scheduling, health checking, rolling updates, service discovery (via integration with Consul), and secret management (via integration with Vault). What you give up is the broader ecosystem - the Helm chart library, the operator pattern, and the deep integration with cloud-native networking and storage.
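As a sketch of what such a job spec looks like - the job name, image, and resource numbers here are illustrative placeholders, not values from any real deployment:

```hcl
# Minimal Nomad job spec: one group, one Docker task.
job "web" {
  datacenters = ["dc1"]
  type        = "service"

  # Constraints restrict which clients are eligible for placement.
  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  group "web" {
    count = 2

    network {
      port "http" {
        to = 80   # container port 80, host port chosen dynamically
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      resources {
        cpu    = 200  # MHz
        memory = 128  # MB
      }
    }
  }
}
```

The `driver`, `resources`, and `constraint` blocks correspond directly to the three things the paragraph above says every job spec declares.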

Architecture and Cluster Sizing

A Nomad cluster has servers (the control plane, typically 3 or 5 in production) and clients (workers that actually run jobs, scaled as needed). Servers handle scheduling, state replication via Raft, and the API. Clients connect to servers, advertise their available resources, accept job placements, and report status.

For a small cluster, run three server nodes across three offshore VPSes with 2 vCPU / 2 GB RAM each; the servers themselves use minimal resources. Then add as many client nodes as you need for worker capacity - usually 2 to 10 VPSes sized to your actual workload. Clients can be any size; a common pattern is a mix of small clients for lightweight services and larger ones for memory-heavy applications.

Nomad integrates with Consul for service discovery and health checking, and with Vault for secret injection. If you run all three (Vault, Consul, Nomad) on the same set of VPSes, the operational footprint is still smaller than that of a single production k8s cluster. The combined stack often runs comfortably on three 4 vCPU / 8 GB RAM VPSes plus however many worker clients you need.
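Wiring Nomad to a co-located Consul and Vault comes down to two stanzas in the agent configuration. A hedged sketch - the addresses and the `create_from_role` name are assumptions for illustration, not values from this article:

```hcl
# Nomad agent config fragment: Consul and Vault integration.
# Assumes a local Consul agent and a reachable Vault cluster.
consul {
  address = "127.0.0.1:8500"

  # Nomad registers itself and job services in Consul automatically.
  auto_advertise   = true
  client_auto_join = true
}

vault {
  enabled = true
  address = "https://vault.yourdomain.tld:8200"

  # Vault role that lets Nomad mint short-lived child tokens for tasks.
  create_from_role = "nomad-cluster"
}
```

With this in place, tasks can declare `service` blocks for Consul registration and `vault` blocks to receive injected secrets, without any per-job credentials.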

Install Nomad on Ubuntu 22.04

Install Nomad from the HashiCorp apt repository (the same setup as Vault and Consul). Configure a server at `/etc/nomad.d/nomad.hcl`: set `datacenter = "dc1"`, `data_dir = "/opt/nomad/data"`, and `bind_addr = "0.0.0.0"`, then add an explicit `server` block: `server { enabled = true bootstrap_expect = 3 }` (or 1 for a single-server lab setup). Enable TLS for the API and gossip encryption with a shared key.

Configure each client node at `/etc/nomad.d/client.hcl`: `client { enabled = true servers = ["server1.yourdomain.tld:4647", "server2.yourdomain.tld:4647", "server3.yourdomain.tld:4647"] }`. Install Docker on client nodes so the docker driver works: `curl -fsSL https://get.docker.com | sh`. Start the agent on every node: `systemctl enable --now nomad`.

From any machine that can reach the Nomad API, define a job in HCL. A minimal job spec runs a Docker container: it has a `job` block with a name, a `group` block (typically one group per service), and a `task` block with the driver `"docker"` and config such as `image = "nginx:alpine"`. Submit it with `nomad job run myjob.hcl` after setting `NOMAD_ADDR` to your server's API endpoint. Watch the job with `nomad job status myjob` - it should report running within a few seconds, and the assigned client node should be running the nginx container.
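Laid out as files, the configuration above looks roughly like this. Hostnames and TLS certificate paths are placeholders for your own, and the gossip key must be generated per cluster:

```hcl
# /etc/nomad.d/nomad.hcl -- server node
datacenter = "dc1"
data_dir   = "/opt/nomad/data"
bind_addr  = "0.0.0.0"

server {
  enabled          = true
  bootstrap_expect = 3   # use 1 for a single-server lab setup

  # Gossip encryption key: generate one with
  # `nomad operator gossip keyring generate` and paste it here.
  # encrypt = "<generated-key>"
}

tls {
  http      = true
  rpc       = true
  ca_file   = "/etc/nomad.d/tls/ca.pem"
  cert_file = "/etc/nomad.d/tls/server.pem"
  key_file  = "/etc/nomad.d/tls/server-key.pem"
}

# /etc/nomad.d/client.hcl -- worker node
datacenter = "dc1"
data_dir   = "/opt/nomad/data"
bind_addr  = "0.0.0.0"

client {
  enabled = true
  servers = [
    "server1.yourdomain.tld:4647",
    "server2.yourdomain.tld:4647",
    "server3.yourdomain.tld:4647",
  ]
}
```

After `systemctl enable --now nomad` on all nodes, `nomad server members` on any server should list all three servers, and `nomad node status` should list every client.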

Why Anubiz Host

100% async — no calls, no meetings
Delivered in days, not weeks
Full documentation included
Production-grade from day one
Security-first approach
Post-delivery support included

Ready to get started?

Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.
