Here's a fact that surprises most teams running Kubernetes: by default, every pod can talk to every other pod in the cluster. Your frontend can reach your database directly. A compromised pod in a staging namespace can access production services. There are no firewalls, no access controls, nothing.
This is fine for development. It's a security incident waiting to happen in production.
The default-allow problem
Kubernetes ships with a flat network model. The CNI plugin (Calico, Cilium, Flannel) gives every pod an IP address, and by default, every pod can reach every other pod on any port. This means:
- A vulnerability in one service gives an attacker lateral movement across the entire cluster
- A misconfigured service can accidentally connect to the wrong database
- Staging workloads can reach production databases
- Any pod can make outbound connections to any internet address (data exfiltration risk)
Network Policies: the Kubernetes firewall
Network Policies are Kubernetes-native resources that control traffic flow at the pod level. They work like firewall rules: you define which pods can communicate with which other pods, and on which ports.
The key principle: default-deny, explicit-allow. Start by blocking everything, then create allow rules for the traffic you actually need.
Step 1: Default deny all ingress
Apply this policy in every namespace (Network Policies are namespaced, so each namespace needs its own copy). It blocks all incoming traffic to pods unless explicitly allowed:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}   # empty selector = every pod in the namespace
  policyTypes:
    - Ingress       # no ingress rules are defined, so all ingress is denied
```
Step 2: Allow specific traffic
Now create policies for each legitimate traffic flow. For example, allow the API to reach the database on port 5432:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-postgres
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres      # this policy applies to the postgres pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api   # only pods labelled app=api may connect
      ports:
        - protocol: TCP
          port: 5432
```
Step 3: Deny egress (the forgotten direction)
Most teams focus on ingress but forget egress. Without egress policies, a compromised pod can exfiltrate data to any external server. Apply default-deny egress and allow only:
- DNS (port 53 to kube-dns)
- Specific external services your app needs (API gateways, SaaS tools)
- Internal services by label selector
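As a sketch, the egress counterpart of the Step 1 policy (the name default-deny-egress is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}   # every pod in the namespace
  policyTypes:
    - Egress        # no egress rules are defined, so all egress is denied
```

You can combine both directions in one policy by listing both Ingress and Egress under policyTypes.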
Common mistakes
1. Choosing a CNI that doesn't support network policies
Flannel, the default CNI in many clusters, does not enforce network policies. You need Calico, Cilium, or Weave Net. On managed services: EKS uses the AWS VPC CNI (add Calico, or enable the VPC CNI's own network-policy support in recent versions), GKE enforces policies natively once network policy enforcement (or Dataplane V2) is enabled, and AKS uses Azure CNI with Calico or Cilium.
2. Not testing policies before applying
A misconfigured network policy can break your application. Always test in staging first. Tools like kubectl-np-viewer and Cilium's Hubble let you visualize which traffic is allowed and which is denied, so you can verify a policy's effect before rolling it out.
3. Forgetting DNS
When you apply default-deny egress, you block DNS too. Your pods can't resolve service names. Always include an egress rule allowing traffic to kube-dns in the kube-system namespace.
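A hedged sketch of such a rule, assuming the standard k8s-app: kube-dns pod label (used by both kube-dns and CoreDNS deployments) and the kubernetes.io/metadata.name namespace label that Kubernetes sets automatically since v1.21:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns                # illustrative name
  namespace: production
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns  # assumption: standard DNS pod label
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP          # DNS falls back to TCP for large responses
          port: 53
```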
4. Skipping namespace isolation
Pods in different namespaces can talk to each other by default. Always include namespace selectors in your policies to prevent staging→production traffic.
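One common pattern, sketched below: combined with a default-deny policy, a bare podSelector inside from matches only pods in the policy's own namespace, so cross-namespace traffic stays blocked (the policy name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: production
spec:
  podSelector: {}                # applies to every pod in the namespace
  ingress:
    - from:
        - podSelector: {}        # all pods in *this* namespace only
```

To admit traffic from another namespace, add an explicit namespaceSelector entry alongside the podSelector.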
Our recommended approach
- Map your traffic flows — document every legitimate service-to-service connection
- Install Cilium or Calico — if not already present
- Apply default-deny in staging first — fix any breakage
- Create allow rules — one per traffic flow
- Add egress rules — DNS + specific external services
- Monitor with Hubble or Calico Enterprise — visualize drops and allows
- Roll to production — with the same policies
We do this for every client engagement. It typically takes 2–3 days of focused work — and it's one of the highest-impact security improvements you can make to a Kubernetes cluster.