HashiCorp Consul on an Offshore VPS
Consul is HashiCorp's service discovery, health checking, and key-value store, used as the connective tissue in many microservice deployments. Hosting Consul on an offshore VPS gives you a private service catalog and a distributed configuration store on infrastructure you control. AnubizHost VPS plans pair the low-latency network Consul's gossip protocol needs with 1 Gbps uplinks, root access, and crypto payment.
Need this done for your project?
We implement, you ship. Async, documented, done in days.
Consul Use Cases - Discovery, Health, KV, and Mesh
Consul started as a service discovery tool - applications register themselves with Consul on startup and other applications query Consul to find them. It then grew health checking (Consul periodically polls each registered service to confirm it is alive), a distributed key-value store (used for shared configuration like feature flags or database connection strings), and a full service mesh (Consul Connect, which provides mTLS between services without changing application code).
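Registration and health checking come together in a single service definition file dropped into the agent's config directory. A minimal sketch - the service name `my-api`, the port, and the health endpoint path are all placeholders:

```shell
# Hypothetical service definition -- name, port, and check path are placeholders.
# Files in /etc/consul.d/ are loaded by the local agent on start or reload.
cat > /etc/consul.d/my-api.hcl <<'EOF'
service {
  name = "my-api"
  port = 8080

  check {
    # The agent polls this endpoint; after repeated failures the
    # service is marked unhealthy and dropped from discovery results.
    http     = "http://localhost:8080/health"
    interval = "10s"
    timeout  = "2s"
  }
}
EOF

consul reload   # pick up the new definition without restarting the agent
```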
For most teams, the discovery plus KV plus health checking trifecta is the main value. Instead of hardcoding service URLs in environment variables, applications query `http://localhost:8500/v1/catalog/service/database` and get the current healthy address. Configuration changes pushed to Consul KV are picked up by watching processes within seconds. Health checks let load balancers route around failed instances automatically.
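A sketch of that day-to-day workflow, assuming a local agent on 127.0.0.1:8500 and a registered service named `database` (both placeholders), along with an illustrative handler script:

```shell
# Discovery: ask the local agent for healthy instances of a service.
# The ?passing filter excludes instances with failing health checks.
curl -s http://127.0.0.1:8500/v1/health/service/database?passing

# KV: push shared configuration and read it back.
consul kv put config/feature-flags/new-checkout true
consul kv get config/feature-flags/new-checkout

# Watch: run a handler whenever the key changes (long-polls via blocking queries).
consul watch -type=key -key=config/feature-flags/new-checkout \
  /usr/local/bin/reload-app.sh   # hypothetical handler script
```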
Self-hosting Consul on an offshore VPS keeps the service catalog private. Service names, IP addresses, ports, and health check definitions are all sensitive metadata - they reveal your service topology and could help an attacker map an environment. A managed service discovery offering would aggregate that data on a vendor's infrastructure. Self-hosting on a VPS you own keeps the topology in your trust domain.
Single-Node vs Cluster Topologies
Consul supports running as a single server, a three-server cluster, or a five-server cluster. For development or very small deployments, a single server works fine but offers no HA - if the Consul node restarts, every service that depends on it loses access to the catalog during the downtime window. For production, three servers across three separate VPSes is the minimum HA configuration: Raft needs a majority to elect a leader, so three servers tolerate one failure and five tolerate two.
Beyond the servers, every host that runs services that register with Consul should also run a Consul client agent. The client agent is a local process that handles service registration, health checks, and the local query API. Applications query their local client agent at 127.0.0.1:8500, and the client agent handles forwarding to the server cluster. This pattern keeps application code simple and reduces network chatter to the server cluster.
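A client agent's config is a stripped-down version of the server's. A sketch, with placeholder private IPs for this host and the three servers:

```shell
# Hypothetical client-agent config -- all IPs are placeholders.
cat > /etc/consul.d/consul.hcl <<'EOF'
datacenter = "dc1"
data_dir   = "/opt/consul"
server     = false                 # client agent, not a Raft server

bind_addr   = "10.0.0.4"           # this host's private address
client_addr = "127.0.0.1"          # local API for apps on this host

# Keep retrying the server cluster until the agent joins.
retry_join = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
EOF

systemctl restart consul
```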
A small three-node Consul server cluster runs comfortably on three 2 vCPU / 2 GB RAM VPSes. Heavy workloads (hundreds of services, thousands of health checks per minute) want 4 vCPU and 8 GB RAM per server. The gossip protocol between nodes uses very little bandwidth at steady state, but bootstrapping a new node briefly transfers the full catalog - have enough headroom for that.
Install Consul on Ubuntu 22.04
Install Consul from the HashiCorp apt repository (same setup as Vault): `wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg && echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" > /etc/apt/sources.list.d/hashicorp.list && apt update && apt install -y consul`.
Configure as a server at `/etc/consul.d/consul.hcl`: set `datacenter = "dc1"`, `data_dir = "/opt/consul"`, `server = true`, `bootstrap_expect = 1` (or 3 for a three-node cluster), `bind_addr = "YOUR_VPS_INTERNAL_IP"`, `client_addr = "127.0.0.1"` (only listen on loopback by default; expose UI via reverse proxy), `ui_config { enabled = true }`, and `acl { enabled = true default_policy = "deny" }` to require ACL tokens for everything.
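Assembled into a file, those settings look roughly like this - the internal IP is a placeholder, and `bootstrap_expect` becomes 3 for a three-node cluster:

```shell
cat > /etc/consul.d/consul.hcl <<'EOF'
datacenter       = "dc1"
data_dir         = "/opt/consul"
server           = true
bootstrap_expect = 1               # 3 for a three-node cluster

bind_addr   = "10.0.0.1"           # this VPS's internal IP (placeholder)
client_addr = "127.0.0.1"          # API/UI on loopback only

ui_config {
  enabled = true
}

acl {
  enabled        = true
  default_policy = "deny"          # everything requires an ACL token
}
EOF
```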
Start: `systemctl enable --now consul`. For a single node, the cluster forms immediately. For a three-node cluster, add the other two nodes' IPs to `retry_join` in each config, then start all three within a couple of minutes of each other. Bootstrap the ACL system: `consul acl bootstrap` returns the initial management token; save it and create a less-privileged token for daily use. With `default_policy = "deny"`, subsequent CLI calls need a token - export it as `CONSUL_HTTP_TOKEN`. Put Caddy or Nginx with TLS in front of port 8500 for the UI. Register a test service: `consul services register -name=test-svc -address=127.0.0.1 -port=8080` and query it: `dig @127.0.0.1 -p 8600 test-svc.service.consul`. DNS-based service discovery is now working.
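After `consul acl bootstrap`, creating a narrower token for daily use looks roughly like this - the policy name and rules are illustrative, not a recommended production policy:

```shell
# Use the management token from `consul acl bootstrap` for these calls.
export CONSUL_HTTP_TOKEN="<management token>"

# Hypothetical policy: register services, read the node catalog.
cat > register-policy.hcl <<'EOF'
service_prefix "" {
  policy = "write"
}
node_prefix "" {
  policy = "read"
}
EOF

consul acl policy create -name "service-register" -rules @register-policy.hcl
consul acl token create -description "app registration" \
  -policy-name "service-register"
```

The SecretID printed by `consul acl token create` is what applications and agents use day to day; the management token stays offline.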
Why Anubiz Host
100% async — no calls, no meetings
Delivered in days, not weeks
Full documentation included
Production-grade from day one
Security-first approach
Post-delivery support included
Ready to get started?
Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.