Every kubectl apply in production is an undocumented change. ArgoCD makes every deployment a reviewed, reversible Git commit.
You’ve felt it. That slight, cold sweat when you type kubectl apply -f deployment.yaml directly against the production cluster. The change works, but the "how" and "why" evaporate into terminal history. You’re not alone—with the average Kubernetes cluster now running 400+ pods across 32 nodes (CNCF 2025), manual operations are a ticking time bomb. GitOps, specifically ArgoCD, replaces that anxiety with a declarative, auditable pipeline where your Git repository is the single source of truth. This isn't just a fancy CI/CD wrapper; it's a fundamental shift in how you manage Kubernetes state.
Why Your kubectl Apply Habit is a Liability
Let's be clear: kubectl is for debugging, not for deployment. Manually applying manifests is slow, error-prone, and creates configuration drift. Consider this benchmark:
| Operation | Manual kubectl Process | ArgoCD GitOps Process |
|---|---|---|
| Deploy 15-resource Helm chart | ~25s (copy-paste, run commands) | ~12s (automated sync post-commit) |
| Audit "What changed?" | Scour terminal history, hope logs exist | Instant Git commit diff with author & message |
| Rollback a broken deployment | Find old YAML, re-apply, pray | git revert and sync, or click "Sync" to previous revision |
| Permission control | Anyone with kubectl config can change anything | RBAC tied to Git review & ArgoCD project limits |
The manual process isn't just slower; it lacks the guardrails needed when 96% of organizations report Kubernetes increased deployment frequency (Red Hat State of Kubernetes 2025). More deployments mean more opportunities for human error. GitOps with ArgoCD codifies the process.
Installing ArgoCD and Bootstrapping Your GitOps Pipeline
First, we install ArgoCD on your cluster. It's just another Kubernetes application, managing itself and others. We'll use kubectl one last time for this.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Wait for the pods to be ready
kubectl wait --for=condition=available deployment -l app.kubernetes.io/name=argocd-server -n argocd --timeout=300s
# Get the initial admin password (for initial login only)
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Now, expose the UI. For a local k3s cluster (which starts in ~30s vs. ~4min for kubeadm), use port-forwarding. In production on EKS/GKE/AKS, you'd configure an Ingress with OIDC.
kubectl port-forward svc/argocd-server -n argocd 8080:443
Navigate to https://localhost:8080. Login with admin and the password from above. The UI is nice, but the real power is in the CLI and declarative manifests. Install the ArgoCD CLI (argocd) and login:
argocd login localhost:8080 --insecure --username admin --password <YOUR_PASSWORD>
Your cluster is now self-hosting its own deployment automation. The next step is to connect it to your Git repository.
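If your manifests live in a private repository, ArgoCD needs credentials before it can sync. One declarative option is a Secret labeled `argocd.argoproj.io/secret-type: repository` — a sketch, with placeholder URL and token:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository  # Tells ArgoCD this Secret describes a repo connection
stringData:
  type: git
  url: https://github.com/your-org/your-k8s-manifests.git
  username: git
  password: <YOUR_PERSONAL_ACCESS_TOKEN>  # Placeholder; use a read-only token scoped to this repo
```

Committing this Secret to Git defeats the purpose, of course — apply it out-of-band or manage it with the secrets tooling covered below.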
Defining Your First Application: The ArgoCD Application CRD
An ArgoCD "Application" is a Custom Resource Definition (CRD) that defines a source of truth (your Git repo, a Helm chart) and a destination (your cluster and namespace). It's the mapping between a directory in Git and a live Kubernetes state.
Create a file my-first-app.yaml. This defines an application that syncs a simple Kubernetes Deployment from a Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-nginx-app
  namespace: argocd
spec:
  # The destination cluster and namespace
  destination:
    server: https://kubernetes.default.svc  # In-cluster API server
    namespace: default
  # The source of truth: a Git repository
  source:
    repoURL: https://github.com/your-org/your-k8s-manifests.git
    targetRevision: HEAD
    path: ./nginx-deployment
  # Sync policy: automated with self-healing
  syncPolicy:
    automated:
      prune: true     # Delete resources if removed from Git
      selfHeal: true  # Automatically revert drift
    syncOptions:
      - CreateNamespace=true  # Auto-create the dest namespace if needed
  # Project for RBAC grouping (default is 'default')
  project: default
Apply this Application definition:
kubectl apply -f my-first-app.yaml
ArgoCD will now monitor the Git repository. When it detects a new commit in the ./nginx-deployment directory, it will automatically synchronize the cluster state to match. You've just created a self-managing deployment pipeline. Check status with argocd app get my-nginx-app.
Taming the Secrets Problem: Sealed Secrets vs. External Secrets
You can't commit plain-text Secrets to Git. Full stop. Two primary solutions exist in the ecosystem, both involving a controller that runs in your cluster alongside ArgoCD.
Option 1: Sealed Secrets (Bitnami). You encrypt secrets locally with a public key, commit the encrypted blob, and the controller decrypts it in-cluster.
# Install the controller
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.26.0/controller.yaml
# Create a Kubernetes Secret locally and seal it
kubectl create secret generic my-app-secret --from-literal=api-key=supersecret --dry-run=client -o yaml > plain-secret.yaml
kubeseal --controller-namespace=kube-system < plain-secret.yaml > sealed-secret.yaml
# Now you can commit sealed-secret.yaml safely.
Option 2: External Secrets Operator (ESO). This is the more scalable, cloud-native approach. Instead of storing encrypted data in Git, you store a reference to a secret in AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault. ESO fetches it and creates the native Kubernetes Secret.
You commit an ExternalSecret manifest that points to your vault. This is cleaner and allows for centralized secret rotation outside of Git. For most production setups, especially with cloud providers, ESO is the recommended path.
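A minimal `ExternalSecret` sketch — it assumes a `ClusterSecretStore` named `aws-secrets-manager` has already been configured and that a secret exists at `prod/my-app/api-key` in AWS Secrets Manager (both names are placeholders):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-secret
  namespace: default
spec:
  refreshInterval: 1h  # Re-fetch from the backing store hourly
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager  # Assumed to be configured separately
  target:
    name: my-app-secret  # The native Kubernetes Secret ESO will create
  data:
    - secretKey: api-key  # Key in the resulting Kubernetes Secret
      remoteRef:
        key: prod/my-app/api-key  # Path in AWS Secrets Manager (placeholder)
```

The manifest is safe to commit: it contains only the reference, never the secret value.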
Managing Dev, Staging, and Prod with Kustomize Overlays
Hardcoding environment values in manifests is an anti-pattern. Kustomize, built into kubectl, is ArgoCD's native tool for environment-specific customization. Your repository structure might look like this:
/your-k8s-manifests
├── base
│ ├── deployment.yaml
│ ├── service.yaml
│ └── kustomization.yaml
├── overlays
│ ├── dev
│ │ ├── replica-count-patch.yaml
│ │ └── kustomization.yaml # references ../../base, adds patches
│ ├── staging
│ │ ├── hpa-patch.yaml
│ │ └── kustomization.yaml
│ └── prod
│ ├── resource-limits-patch.yaml
│ ├── ingress-patch.yaml
│ └── kustomization.yaml
Your ArgoCD Application source would then point to path: ./overlays/prod. Each environment is a clear, patch-based divergence from a common base — far cleaner than maintaining three separate copies of your manifests.
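As a sketch, the base and prod overlay `kustomization.yaml` files from the tree above might look like this (two separate files, shown together; patch filenames match the tree):

```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base  # Inherit everything from base
patches:
  - path: resource-limits-patch.yaml
  - path: ingress-patch.yaml
```

You can preview the rendered output locally with `kubectl kustomize overlays/prod` before committing — the same rendering ArgoCD performs at sync time.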
Automating Pre and Post-Deploy Tasks with Sync Hooks
Need to run a database migration before the new app version rolls out? Or send a notification to Slack after a successful deploy? Use Sync Hooks, defined via Kubernetes annotations in your manifests.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
  annotations:
    argocd.argoproj.io/hook: PreSync  # Runs BEFORE main manifests are applied
    argocd.argoproj.io/hook-delete-policy: HookSucceeded  # Delete the Job after success
spec:
  template:
    spec:
      containers:
        - name: migrator
          image: your-app:migrate
          envFrom:
            - secretRef:
                name: app-db-secret
      restartPolicy: Never
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: post-deploy-check
  annotations:
    argocd.argoproj.io/hook: PostSync  # Runs AFTER all resources are healthy
data:
  # ... a script or config for a health check ...
Hooks can be PreSync, PostSync, Sync, or SyncFail. They enable complex deployment workflows without leaving the GitOps paradigm.
Locking Down Production: RBAC and Project Scoping
By default, an ArgoCD admin can sync anything to any cluster. For production, you must define Projects and RBAC to limit scope. A Project is a logical grouping of Applications with restrictions.
Create a project that only allows deploying to the production namespace from a specific repo. (The AppProject itself can't restrict branches — pin targetRevision: main on the Application and use Git branch protection for that.)
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production-project
  namespace: argocd
spec:
  # Source repositories allowed
  sourceRepos:
    - 'https://github.com/your-org/production-manifests.git'
  # Destination clusters and namespaces allowed
  destinations:
    - namespace: production
      server: https://kubernetes.default.svc
  # Namespaces in which Applications belonging to this project may live
  sourceNamespaces:
    - argocd
  # Cluster- and namespace-scoped resource kinds this project may manage
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  namespaceResourceWhitelist:
    - group: '*'
      kind: '*'
  # RBAC roles can be defined here (spec.roles) or integrated with SSO
Then, bind this project to specific users or groups via ArgoCD's RBAC settings (often integrated with OIDC like GitHub Teams or Google Groups). This ensures your developer team can sync dev apps, but only the platform team can sync to the production project.
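That binding lives in the `argocd-rbac-cm` ConfigMap. A sketch, assuming SSO groups named `your-org:platform-team` and `your-org:dev-team` (both placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly  # Everyone can view; nobody can sync by default
  policy.csv: |
    # Platform team may sync anything in production-project
    p, role:prod-deployer, applications, sync, production-project/*, allow
    g, your-org:platform-team, role:prod-deployer
    # Developers may sync only apps in the default project
    p, role:dev-deployer, applications, sync, default/*, allow
    g, your-org:dev-team, role:dev-deployer
```

Each `p` line grants a permission to a role; each `g` line maps an SSO group onto a role.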
Debugging the Inevitable: Common ArgoCD Sync Errors
Even with GitOps, things go wrong. Here’s how to diagnose common issues.
1. Sync Hangs with OutOfSync
The UI shows resources as OutOfSync even after a sync. Run argocd app sync <app-name> --prune. Often, it's a resource that couldn't be pruned due to a finalizer or ownership issue. Use kubectl to investigate the specific resource ArgoCD is complaining about.
2. Permission Denied on Sync
You commit a new manifest, but sync fails with:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:argocd:argocd-application-controller" cannot create resource "pods" in API group "" in the namespace "production"
Fix: The ArgoCD service account needs RBAC in the target namespace. Create a RoleBinding:
kubectl create rolebinding argocd-manager-prod --clusterrole=admin --serviceaccount=argocd:argocd-application-controller -n production
3. Resource Quota or Limit Issues
Sync fails because the cluster is full. You might see an event like:
0/3 nodes are available: insufficient memory.
Fix: Check allocatable resources and adjust your pod's resources.requests in Git:
kubectl describe nodes
kubectl describe quota -n <namespace>
Next Steps: From Basic Sync to Progressive Delivery
You've now moved from manual kubectl commands to auditable, automated deployments. This is the foundation. Your next steps should be:
- Implement a Blue/Green or Canary Rollout: Use ArgoCD's sister project, Argo Rollouts. It replaces the standard Deployment object and provides advanced deployment strategies with automated analysis and rollback based on Prometheus metrics.
- Centralize Monitoring: Point ArgoCD at a Prometheus instance for its own metrics. Monitor sync status, queue depth, and controller health. Set up alerts for sync failures.
- Multi-Cluster Management: Define multiple clusters in ArgoCD's cluster secrets. A single ArgoCD instance can manage deployments across your dev (k3s), staging (EKS), and production (GKE) clusters, with applications targeted to each.
- Automate Cluster Bootstrap: Use ArgoCD to manage itself and other cluster "baseline" tools (Prometheus, Cert-manager, Istio). The App of Apps pattern lets you define a single bootstrap application that creates all other ArgoCD applications, making a new cluster's entire setup declarative.
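The App of Apps pattern in that last bullet is itself just an Application whose source directory contains other Application manifests — a sketch, with the repo URL and path as placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd  # Child Applications are themselves created in argocd
  source:
    repoURL: https://github.com/your-org/cluster-bootstrap.git  # Placeholder repo
    targetRevision: HEAD
    path: apps  # Directory holding one Application manifest per baseline tool
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Apply this one manifest to a fresh cluster and ArgoCD fans out to create everything else.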
The shift to GitOps isn't just about new tools. It's about changing your team's mindset: the cluster is not a pet to be hand-fed commands, but a cattle herd whose desired state is versioned, reviewed, and applied automatically. Every change is a commit. Every rollback is a git revert. Your kubectl config becomes a read-only debugging tool, and your deployment process finally has the audit trail it always needed. Stop applying and start committing.