Kubernetes Network Policies provide a powerful mechanism for controlling traffic flow between pods, namespaces, and external endpoints. By default, Kubernetes allows all pod-to-pod communication, which creates significant security risks in multi-tenant environments. Network Policies enable you to implement micro-segmentation and zero-trust networking principles within your cluster.
Understanding and implementing Network Policies is essential for any production Kubernetes deployment. Without proper network segmentation, a compromised pod can potentially access any other pod in the cluster, leading to lateral movement attacks and data breaches. This comprehensive guide covers everything from basic concepts to advanced implementation patterns.
Understanding Network Policy Fundamentals
Network Policies are Kubernetes resources that control traffic flow at the IP address or port level (OSI layer 3 or 4). They use label selectors to identify pods and define rules for ingress (incoming) and egress (outgoing) traffic. When a Network Policy selects a pod, that pod becomes isolated and only allows traffic explicitly permitted by the policy.
It’s crucial to understand that Network Policies are additive – if multiple policies select a pod, the union of all rules applies. There’s no way to create a “deny” rule; instead, you create policies that allow specific traffic, and everything else is implicitly denied for isolated pods.
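Because rules are additive, two policies that select the same pods simply merge. As an illustrative sketch (the labels and ports here are hypothetical), these two policies together let an `app: api` pod accept traffic on port 8080 from frontend pods and on port 9090 from monitoring pods, and nothing else:

```yaml
# Policy 1: api pods accept port 8080 from frontend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
---
# Policy 2: the same api pods also accept port 9090 from monitoring pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-monitoring
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: monitoring
    ports:
    - port: 9090
```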
Network Policy Components
Every Network Policy consists of several key components that work together to define traffic rules:
- podSelector: Identifies which pods the policy applies to using label matching
- policyTypes: Specifies whether the policy applies to Ingress, Egress, or both
- ingress: Rules for incoming traffic to selected pods
- egress: Rules for outgoing traffic from selected pods
- namespaceSelector: Allows traffic from/to pods in specific namespaces
- ipBlock: Allows traffic from/to specific IP CIDR ranges
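To see how these components fit together, here is an illustrative policy (all names and labels are hypothetical) that combines a podSelector target with namespaceSelector, ipBlock, and port rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: component-overview   # illustrative example, not a real workload
spec:
  podSelector:               # which pods the policy applies to
    matchLabels:
      app: api
  policyTypes:               # which directions the policy governs
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:     # traffic from any pod in matching namespaces
        matchLabels:
          team: platform
    - ipBlock:               # traffic from a specific CIDR range
        cidr: 198.51.100.0/24
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:           # traffic to pods in the same namespace
        matchLabels:
          app: db
```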
Implementing Default Deny Policies
The first step in securing your cluster is implementing default deny policies. These policies ensure that no traffic is allowed unless explicitly permitted, following the principle of least privilege.
# Default deny all ingress traffic in a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {} # Selects all pods in the namespace
  policyTypes:
  - Ingress
  # No ingress rules = deny all ingress
---
# Default deny all egress traffic in a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  # No egress rules = deny all egress
---
# Default deny both ingress and egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

After applying default deny policies, you’ll need to create additional policies to allow legitimate traffic. This approach ensures that new pods are secure by default and require explicit permission to communicate.
Common Network Policy Patterns
Allow Traffic from Specific Pods
The most common pattern is allowing traffic between specific application components. For example, allowing frontend pods to communicate with backend API pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
          tier: web
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 8443

Allow Traffic from Specific Namespaces
In multi-tenant environments, you often need to allow traffic from pods in specific namespaces while blocking traffic from others:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: monitoring
      podSelector:
        matchLabels:
          app: prometheus
    ports:
    - protocol: TCP
      port: 9090

Allow DNS Traffic
When implementing egress policies, you must explicitly allow DNS traffic, or pods won’t be able to resolve service names:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Allow External Traffic
Sometimes pods need to communicate with external services. Use ipBlock to allow traffic to specific IP ranges:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payment-service
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24 # Payment gateway IP range
    ports:
    - protocol: TCP
      port: 443
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443

Advanced Network Policy Strategies
Microsegmentation Pattern
Microsegmentation involves creating fine-grained policies for each application component. This limits the blast radius of any security incident:
# Complete microsegmentation for a three-tier application
---
# Frontend can only receive traffic from ingress controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
    ports:
    - port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 8080
  - to: # DNS
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
---
# Backend can only receive from frontend, connect to database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - port: 5432
  - to: # DNS
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
---
# Database only accepts connections from backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 5432
  egress: [] # No egress allowed

Namespace Isolation Pattern
For multi-tenant clusters, isolate namespaces from each other while allowing traffic within the same namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {} # Allow from same namespace
    - namespaceSelector:
        matchLabels:
          shared-services: "true" # Allow from shared services

Testing and Validating Network Policies
Testing Network Policies is crucial to ensure they work as expected. Here are several approaches:
# Create a test pod for connectivity testing
kubectl run test-pod --image=busybox --rm -it --restart=Never -- sh

# Test connectivity from within the pod
wget -qO- --timeout=2 http://api-service:8080/health
nc -zv database-service 5432

# Use netshoot for advanced testing
kubectl run netshoot --image=nicolaka/netshoot --rm -it --restart=Never -- bash
curl -v http://api-service:8080
nmap -p 5432 database-service

Network Policy Tools and CNI Requirements
Network Policies require a CNI (Container Network Interface) plugin that supports them. Not all CNI plugins implement Network Policies:
- Calico: Full Network Policy support with additional features
- Cilium: eBPF-based with Layer 7 policies
- Weave Net: Supports standard Network Policies
- Flannel: Does NOT support Network Policies by default
- AWS VPC CNI: Native Network Policy support in recent versions (v1.14+); older versions required Calico
Monitoring and Troubleshooting
Monitoring Network Policy effectiveness requires visibility into network flows. Tools like Cilium Hubble or Calico’s flow logs provide this visibility:
# Cilium Hubble - observe network flows
hubble observe --namespace production --verdict DROPPED

# Calico - enable flow logs
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  flowLogsFlushInterval: 15s
  flowLogsFileEnabled: true

Best Practices Summary
- Always start with default deny policies in production namespaces
- Use labels consistently across your deployments for policy targeting
- Document the purpose of each Network Policy
- Test policies in staging before applying to production
- Monitor dropped traffic to identify misconfigurations
- Use namespace labels for cross-namespace policies
- Remember to allow DNS traffic when implementing egress policies
- Regularly audit and review Network Policies
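One way to operationalize several of these practices is a baseline manifest applied to every new production namespace: default deny in both directions plus the DNS exception. This is a sketch that assumes cluster DNS carries the conventional k8s-app: kube-dns label (as CoreDNS does in kube-system); adjust the selector if your cluster labels DNS differently:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: baseline-default-deny
spec:
  podSelector: {}        # isolate every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: baseline-allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}   # DNS pods in any namespace
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```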
Conclusion
Kubernetes Network Policies are essential for implementing defense-in-depth security in your cluster. By starting with default deny policies and carefully allowing only necessary traffic, you can significantly reduce your attack surface and limit the impact of security incidents. Remember that Network Policies are just one layer of security – combine them with other controls like Pod Security Standards, RBAC, and runtime security for comprehensive protection.


