
ArgoCD on an Offshore VPS for Private GitOps

ArgoCD turns git into the single source of truth for Kubernetes deployments. A self-hosted ArgoCD on an offshore VPS lets you run that workflow against private clusters without exposing your manifests, secrets, or deployment history to a managed GitOps SaaS. AnubizHost VPS plans ship clean kernels, open ports, and 1 Gbps uplinks - everything you need for ArgoCD to reach your downstream clusters and pull from your git remote without throttling or surveillance.


ArgoCD Architecture and Why VPS Self-Hosting Makes Sense

ArgoCD has four core components - the API server, the repo server (which clones repositories and renders manifests), the application controller (the reconciliation loop that compares git state to cluster state), and a Redis cache. Each runs as a workload in a Kubernetes namespace, typically called argocd. The repo server polls your git remotes and materializes Helm or Kustomize templates into plain manifests; the application controller compares that rendered desired state against live cluster state and pushes any drift corrections through the Kubernetes API.

A managed GitOps provider sees every commit in every connected repository, every sealed-secret payload it materializes, and every kube context it deploys to. For teams that run private clusters in offshore data centers, that visibility is a meaningful disadvantage. Self-hosting ArgoCD on a VPS that you also control means the GitOps engine, the git history, the rendered manifests, and the downstream clusters all live in one trust domain.

The VPS you run ArgoCD on does not have to be the same VPS as your worker cluster. A common pattern is one offshore VPS running k3s plus ArgoCD as the management plane, which then deploys to multiple downstream clusters at other offshore providers. This separation lets you rotate the management plane without rebuilding worker capacity.
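The reconciliation model becomes concrete with a minimal Application resource. This is a sketch - the application name, repo URL, path, and destination server below are hypothetical placeholders you would replace with your own:

```shell
# Minimal ArgoCD Application sketch; repoURL, path, and destination
# server are hypothetical placeholders - substitute your own.
kubectl apply -n argocd -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:example/gitops-config.git
    targetRevision: main
    path: apps/demo            # repo server renders manifests from this path
  destination:
    server: https://worker.example.com:6443   # downstream cluster API
    namespace: default
  syncPolicy:
    automated:
      prune: true              # delete resources removed from git
      selfHeal: true           # revert manual drift on the cluster
EOF
```

Once applied, the application controller continuously compares what the repo server renders from `apps/demo` against what actually exists on the destination cluster, and `automated` sync closes any gap without a human clicking Sync.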

Sizing ArgoCD for Small and Mid-Size Workloads

For a single team managing 10 to 50 applications across one or two downstream clusters, a 4 GB RAM VPS with 2 vCPU and 40 GB SSD is sufficient. The argocd-server and repo-server pods together use about 800 MB RAM at idle. The application controller scales its memory usage with the number of applications and the size of their manifests - a typical kustomization with 30 to 50 resources adds 50 to 100 MB to the controller's working set. For larger deployments (200+ applications, multiple downstream clusters, many Helm charts with dependent subcharts), step up to 8 GB RAM and 4 vCPU. The repo server in particular benefits from CPU when it is rendering many Helm charts in parallel, and from disk IO when it is cloning large monorepos. Allocate at least 100 GB SSD if your repos are big.

Network egress from the ArgoCD VPS goes to two places - your git remote (small, infrequent) and the Kubernetes API of your worker clusters (small, constant polling). Inbound traffic is just your user UI and CLI sessions. A 1 Gbps uplink is overkill for steady state but useful when ArgoCD is also handling large Helm chart pulls or sync waves with many resources.

Install ArgoCD on k3s on an Offshore VPS

1. Provision a fresh Ubuntu 22.04 VPS with at least 4 GB RAM.
2. Install k3s with default settings: `curl -sfL https://get.k3s.io | sh -`.
3. Verify the node is Ready: `kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes`, then export the kubeconfig path for convenience: `export KUBECONFIG=/etc/rancher/k3s/k3s.yaml`.
4. Create the namespace and install ArgoCD from the upstream manifest: `kubectl create namespace argocd && kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml`. This deploys the core components described above, plus supporting services.
5. Wait for all pods to report Ready: `kubectl get pods -n argocd -w`. Typical startup is 2 to 4 minutes on a 4 GB VPS.
6. Retrieve the initial admin password: `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`.
7. Expose the API server via a NodePort or a Traefik ingress (k3s ships with Traefik). For NodePort: `kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'`, then check which port was assigned with `kubectl get svc argocd-server -n argocd`. Put Caddy or Traefik with TLS in front.
8. Log in at the resulting URL with username `admin` and the password retrieved above. Connect your first git repository and your first downstream cluster using the argocd CLI, and you have a fully self-hosted GitOps control plane on a server you own.
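The final step - connecting a repository and a downstream cluster - can be sketched with the argocd CLI. The hostname, repo URL, kube context name, and destination server below are hypothetical; substitute your own:

```shell
# Hypothetical endpoints and repo URL - substitute your own.
# Log in to the ArgoCD API server (the NodePort or ingress URL from above).
argocd login argocd.example.com --username admin --password "$ARGOCD_PASSWORD"

# Register a private git repository over SSH.
argocd repo add git@github.com:example/gitops-config.git \
  --ssh-private-key-path ~/.ssh/id_ed25519

# Register a downstream cluster by its context name in your kubeconfig.
# This installs a service account on the target cluster for ArgoCD to use.
argocd cluster add worker-cluster-context

# Create a first application tracking a path in the repo.
argocd app create demo \
  --repo git@github.com:example/gitops-config.git \
  --path apps/demo \
  --dest-server https://worker.example.com:6443 \
  --dest-namespace default \
  --sync-policy automated
```

After `argocd app create`, `argocd app get demo` shows sync and health status; from that point on, a merged commit to `apps/demo` is the deployment.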

Why Anubiz Host

100% async — no calls, no meetings
Delivered in days, not weeks
Full documentation included
Production-grade from day one
Security-first approach
Post-delivery support included

Ready to get started?

Skip the research. Tell us what you need, and we'll scope it, implement it, and hand it back — fully documented and production-ready.
