KUBERNETES
NetworkPolicy (Network Policy)
Resource isolation design

This example uses NetworkPolicy to isolate resources. The goals are:

- user (ISV) applications cannot access the k8s cluster's own applications
- k8s cluster applications can access ISV applications
- ISV applications can access the external network

Architecture diagram of this example:

Create the isv-demo namespace
ns=isv-demo
kubectl create ns $ns
kubectl config set-context \
  --current \
  --namespace $ns
kubectl label ns $ns name=$ns
# If kube-system is not labeled yet, label it as well
kubectl label ns kube-system name=kube-system
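The name labels are what the namespaceSelectors in the NetworkPolicies below match on, so it is worth verifying them before going further:

kubectl get ns isv-demo kube-system --show-labels
# both namespaces should show name=<namespace> in the LABELS column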
Define the resources. To avoid conflicts with host ports, the web Service port is set to 8800.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: couchdb
        image: "couchdb"
        ports:
        - containerPort: 5984
        env:
        - name: COUCHDB_USER
          value: admin
        - name: COUCHDB_PASSWORD
          value: password
---
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
  - name: db
    port: 15984
    targetPort: 5984
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: nodebrady
        image: mabenoit/nodebrady
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - name: api
    port: 8080
    targetPort: 3000
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - name: web
    port: 8800
    targetPort: 80
  type: ClusterIP
Create the WEB, API and DB Pods/Services
kubectl apply \
  -f db-api-web-deployments.yaml
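Before testing, confirm that the three Deployments are up and the Services have ClusterIPs:

kubectl get deploy,svc,pod -n isv-demo -o wide
kubectl rollout status deploy/web -n isv-demo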
Test:
web=$(kubectl get svc web -o jsonpath='{.spec.clusterIP}'):8800
echo $web
# curl web.isv-demo:8800
curl $(kubectl get svc web -o jsonpath='{.spec.clusterIP}'):8800

kubectl run curl-$RANDOM --image=radial/busyboxplus:curl --rm -it --generator=run-pod/v1 -n isv-demo
[ root@curl-3899:/ ]$ curl www.baidu.com
<!DOCTYPE html>
<!--STATUS OK--><html> <head><meta http-equiv=con
[ root@curl-3899:/ ]$ curl http://db:15984
{"couchdb":"Welcome","version":"3.0.0","git_sha":"03a77db6c","uuid":"05616a1b6f1eccbed4f24d3e6d5526d2","features":["access-ready","partitioned","pluggable-storage-engines","reshard","scheduler"],"vendor":{"name":"The Apache Software Foundation"}}
[ root@curl-3899:/ ]$ # exit
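Everything is reachable at this point because the namespace has no NetworkPolicy yet and Kubernetes allows all traffic by default; you can confirm that with:

kubectl get networkpolicy -n isv-demo
# expected: nothing listed yet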
Define the NetworkPolicy: deny-all
# deny-all-netpol.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: isv-demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
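The empty podSelector selects every pod in isv-demo, and listing both Ingress and Egress under policyTypes without any allow rules blocks all traffic, including DNS. If you apply this policy on its own, the tests above should now fail; a quick sanity check (a sketch, the pod name curl-denytest is arbitrary):

kubectl apply -f deny-all-netpol.yaml
kubectl run curl-denytest --image=radial/busyboxplus:curl --rm -it --generator=run-pod/v1 -n isv-demo
# inside the pod, both DNS resolution and direct connections should now fail:
#   curl -m 5 web:8800
#   curl -m 5 http://db:15984
# then exit; --rm removes the test pod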
Define the ingress and egress rules:
# isv-demo-in-out.yaml
# Goals:
# 1. All pods in namespace isv-demo can access each other
# 2. All pods in namespace isv-demo can reach port 53 of coredns in namespace kube-system
# 3. All pods in namespace kube-system can access all pods in namespace isv-demo
# 4. External access: all pods in namespace isv-demo can reach ports 80 and 443 of any
#    address except 172.20.0.0/16 (physical hosts) and 172.23.0.0/16 (calico)
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: isv-demo-in-out
  namespace: isv-demo
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: isv-demo
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
  # These three ports are open to every source; tighten as needed
  - from: []
    ports:
    - protocol: TCP
      port: 8800
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: isv-demo
  # to kube-dns
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  # to external websites
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.20.0.0/16
        - 172.23.0.0/16
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
# Note: developer-center is the Developer Center namespace; in production it is c87e2267-1001-4c70-bb2a-ab41f3b81aa3
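The third ingress rule (from: []) opens ports 8800, 80 and 443 to any source, as the comment notes. One way to tighten it is to move that rule into a separate policy that only selects the web pod. The sketch below is an illustration, not part of this setup: the policy name isv-demo-web-public is made up, and note that NetworkPolicy ports are matched against the port on the pod (containerPort 80 for nginx here), not the Service port 8800.

# Sketch: expose only the web pod to all sources, on its container port
cat <<'EOF' | kubectl apply -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: isv-demo-web-public   # hypothetical name, for illustration only
  namespace: isv-demo
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from: []
    ports:
    - protocol: TCP
      port: 80
EOF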
Create the NetworkPolicies
# Apply the first NetworkPolicy: deny all Ingress/Egress
kubectl apply \
  -f deny-all-netpol.yaml

# Apply the NetworkPolicy definition related to isv-demo-in-out
kubectl apply \
  -f isv-demo-in-out.yaml
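Once both policies are applied, list and inspect them to make sure the selectors and rules look as intended:

kubectl get networkpolicy -n isv-demo
kubectl describe networkpolicy deny-all isv-demo-in-out -n isv-demo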
Test:
# 1. All pods in namespace isv-demo can access each other
# 2. All pods in namespace isv-demo can reach port 53 of coredns in namespace kube-system
# 3. All pods in namespace kube-system can access all pods in namespace isv-demo
# 4. External access: all pods in namespace isv-demo can reach ports 80 and 443 of any
#    address except 172.20.0.0/16 (physical hosts) and 172.23.0.0/16 (calico)
web=$(kubectl get svc web -o jsonpath='{.spec.clusterIP}'):8800

#====== Test inside namespace isv-demo ======
kubectl run curl-$RANDOM --image=radial/busyboxplus:curl --rm -it --generator=run-pod/v1 -n isv-demo

# --> access the web service in isv-demo
[ root@curl-24497:/ ]$ curl web:8800
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

# --> access the db service in isv-demo
[ root@curl-24497:/ ]$ curl http://db:15984
{"couchdb":"Welcome","version":"3.0.0","git_sha":"03a77db6c","uuid":"05616a1b6f1eccbed4f24d3e6d5526d2","features":["access-ready","partitioned","pluggable-storage-engines","reshard","scheduler"],"vendor":{"name":"The Apache Software Foundation"}}

# --> access baidu on the external network (HTTP, allowed)
[ root@curl-24497:/ ]$ curl www.baidu.com
<!DOCTYPE html>
<!--STATUS OK--><html>

# --> ping baidu (ICMP is not covered by the TCP 80/443 egress rule, so it fails)
[ root@curl-24497:/ ]$ ping www.baidu.com -c3 -W2
PING www.baidu.com (61.135.169.125): 56 data bytes
--- www.baidu.com ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

# --> access the Developer Center web address (172.20.0.0/16 is excluded)
[ root@curl-24497:/ ]$ curl 172.20.58.132 --connect-timeout 5
curl: (28) Connection timed out after 5001 milliseconds

# --> access the calico IP of a pod in kube-system
[ root@curl-24497:/ ]$ ping 172.23.166.156 -c3 -W2
PING 172.23.166.156 (172.23.166.156): 56 data bytes
--- 172.23.166.156 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

# --> access the calico IP of a pod in isv-demo
[ root@curl-24497:/ ]$ ping 172.23.166.151
PING 172.23.166.151 (172.23.166.151): 56 data bytes
64 bytes from 172.23.166.151: seq=0 ttl=63 time=0.243 ms
[ root@curl-24497:/ ]$ exit

#====== Test inside namespace kube-system ======
kubectl run curl-$RANDOM --image=radial/busyboxplus:curl --rm -it --generator=run-pod/v1 -n kube-system
[ root@curl-23248:/ ]$ curl web.isv-demo:8800
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
[ root@curl-23248:/ ]$ curl http://db.isv-demo:15984
{"couchdb":"Welcome","version":"3.0.0","git_sha":"03a77db6c","uuid":"05616a1b6f1eccbed4f24d3e6d5526d2","features":["access-ready","partitioned","pluggable-storage-engines","reshard","scheduler"],"vendor":{"name":"The Apache Software Foundation"}}
[ root@curl-23248:/ ]$ # exit
Summary: make sure to label the namespaces first, because NetworkPolicy matches namespaces by label.

References:
https://alwaysupalwayson.blogspot.com/2019/09/kubernetes-network-policies-how-to.html
https://github.com/mathieu-benoit/k8s-netpol
https://ahmet.im/blog/kubernetes-network-policy/
https://github.com/ahmetb/kubernetes-network-policy-recipes