Testing and Verifying Kubernetes Features

Kubernetes features under test

  1. Container scheduling policy management: deploy an app onto a designated target server.
  2. Creating and destroying workloads on nodes
  3. Horizontal scaling of containers (applications) on demand
  4. Health checks and recovery from failures
  5. Container resource management and limits
  6. Service and application management
  7. Load balancing
  8. Application configuration management

Deploying an app onto a designated target server

1. List the nodes' labels. Kubernetes relies on labels to group and manage resources along multiple dimensions.

[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get node --show-labels
NAME STATUS AGE LABELS
te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo Ready 4d beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/hostname=te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo
te-ipopai2hb7-1-t7xfovwwvv2r-kube-minion-yih3r5xg2pay Ready 1h beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/hostname=te-ipopai2hb7-1-t7xfovwwvv2r-kube-minion-yih3r5xg2pay
te-ipopai2hb7-2-snxnzgqn4unf-kube-minion-jurzrnmkamjb Ready 1h beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/hostname=te-ipopai2hb7-2-snxnzgqn4unf-kube-minion-jurzrnmkamjb
te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f Ready,SchedulingDisabled 4d beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f

2. Use nodeSelector to pin the pod to a chosen node.

example git:(master) ✗ cat hello.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-nginx
  labels:
    app: nginx
spec:
  containers:
  - name: hello-nginx
    image: 172.16.130.151:4000/nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
  restartPolicy: Always
  nodeSelector:
    kubernetes.io/hostname: te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo
---
apiVersion: v1
kind: Service
metadata:
  name: hello-nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - port: 80
    nodePort: 30080

3. Create the resources from hello.yaml. The output below shows the pod was scheduled on node te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo.

kubectl create -f hello.yaml
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get pods hello-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hello-nginx 1/1 Running 0 32m 10.100.59.5 te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo
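nodeSelector is not limited to the built-in kubernetes.io/hostname label: you can attach your own labels to nodes and schedule against those. A sketch, where the node name placeholder and the disktype=ssd label are hypothetical:

```yaml
# Hypothetical: first label a node, e.g.
#   kubectl label node <node-name> disktype=ssd
# then select on the custom label in the pod spec:
spec:
  nodeSelector:
    disktype: ssd
```

Custom labels keep the manifest portable: the pod follows any node carrying the label, rather than being tied to one hostname.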

Creating and destroying workloads on nodes

The hello-nginx pod deployed above can be destroyed with either of the following commands:

1. kubectl delete -f hello.yaml
2. kubectl delete pod hello-nginx

Horizontal scaling of containers (applications) on demand

In production, a service often needs to be scaled out or in; Kubernetes handles this through the ReplicationController (rc) scale mechanism.

Create rc-nginx

example git:(master) ✗ cat hello-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: rc-nginx
    spec:
      containers:
      - name: nginx-2
        image: 172.16.130.151:4000/nginx
        ports:
        - containerPort: 80
kubectl create -f hello-rc.yaml

Scale up with kubectl scale

[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl scale rc rc-nginx --replicas=3
replicationcontroller "rc-nginx" scaled
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get rc
NAME DESIRED CURRENT READY AGE
rc-nginx 3 3 3 3m
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-nginx 1/1 Running 0 2h
nginx 1/1 Running 0 4d
rc-nginx-2r19p 1/1 Running 0 3m
rc-nginx-79553 1/1 Running 0 10s
rc-nginx-c6k7f 1/1 Running 0 3m

Scale down with kubectl scale

[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl scale rc rc-nginx --replicas=1
replicationcontroller "rc-nginx" scaled
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get rc
NAME DESIRED CURRENT READY AGE
rc-nginx 1 1 1 6m
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-nginx 1/1 Running 0 2h
nginx 1/1 Running 0 4d
rc-nginx-c6k7f 1/1 Running 0 6m
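Beyond manual kubectl scale, replicas can also be adjusted automatically with a HorizontalPodAutoscaler. A minimal sketch targeting the rc above; the HPA name and thresholds are illustrative, and it only works if a metrics source (e.g. heapster or metrics-server) is running in the cluster:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: rc-nginx-hpa        # illustrative name
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: rc-nginx
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80   # illustrative threshold
```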

Health checks and recovery from failures

Kubernetes pod health checks fall into two categories:

  1. livenessProbe: determines whether a container is alive (Running). If the probe finds the container unhealthy, the kubelet kills it and reacts according to the container's restart policy. If a pod does not define a livenessProbe, the kubelet treats the probe result as permanently Success.
  2. readinessProbe: determines whether a container has finished starting (Ready) and can accept requests. If the probe fails, the pod's status is updated and the endpoint controller removes the pod's endpoint from the Service's endpoints.

The livenessProbe below uses a TCPSocketAction: it attempts a TCP connection to the container's IP address and port, and the container is considered healthy if the connection can be established.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
  labels:
    app: nginx
spec:
  containers:
  - name: hello-nginx
    image: 172.16.130.151:4000/nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
  restartPolicy: Always
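The check a tcpSocket liveness probe performs is essentially a timed connection attempt. A small Python sketch of the same idea (not the kubelet's actual implementation):

```python
import socket

def tcp_healthy(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within
    the timeout -- the same pass/fail criterion a tcpSocket probe uses."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # connection refused, timed out, or otherwise unreachable
        return False
```

The kubelet additionally applies initialDelaySeconds before the first check and counts consecutive failures before declaring the container unhealthy.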

Recovery is driven mainly by the rc. Below we delete one of rc-nginx's pods; the rc detects that the system no longer has the desired number of pods and starts a new one to restore it.

[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-nginx 1/1 Running 0 2h
nginx 1/1 Running 0 4d
pod-with-healthcheck 1/1 Running 0 4m
rc-nginx-c6k7f 1/1 Running 0 34m
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl delete pod rc-nginx-c6k7f
pod "rc-nginx-c6k7f" deleted
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-nginx 1/1 Running 0 2h
nginx 1/1 Running 0 4d
pod-with-healthcheck 1/1 Running 0 4m
rc-nginx-f0v5f 0/1 ContainerCreating 0 2s
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-nginx 1/1 Running 0 2h
nginx 1/1 Running 0 4d
pod-with-healthcheck 1/1 Running 0 4m
rc-nginx-f0v5f 1/1 Running 0 6s
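The recovery shown above is the rc's control loop at work: it continually compares the observed pods against the desired replica count and creates or deletes pods to close the gap. A toy sketch of that reconciliation step (not the actual controller code):

```python
def reconcile(desired, current_pods):
    """Toy ReplicationController step: given the desired replica count and
    the list of observed pod names, decide what to create or delete."""
    diff = desired - len(current_pods)
    if diff > 0:
        # too few pods: schedule `diff` new ones
        return {"create": diff, "delete": []}
    # too many (or exactly enough): delete the surplus
    # (a real controller ranks deletion candidates, e.g. by readiness)
    return {"create": 0, "delete": current_pods[desired:]}
```

Deleting rc-nginx-c6k7f made `len(current_pods)` drop below `desired`, so the loop created rc-nginx-f0v5f on its next pass.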

Container resource management and limits

Why set resource limits (QoS)

Without advanced scheduling controls such as nodeSelector, node affinity, or pod affinity/anti-affinity, there is no way to steer a service onto particular machines, so CPU- or memory-intensive pods may land on the same node and compete for resources. Moreover, without resource limits, critical services may be killed by the OOM (Out of Memory) killer or have their CPU usage throttled when contention occurs.

Resource requests and limits

For each resource, a container can specify a request and a limit. A request may range from 0 up to the node's allocatable capacity, and a limit from the request up to infinity: 0 <= request <= Node Allocatable, request <= limit <= Infinity.
For CPU, if a pod's usage exceeds its limit, the pod is not killed but is throttled; if no limit is set, the pod may use all idle CPU on the node.
For memory, when a pod's usage exceeds its limit, the kernel OOM-kills the container's process. After an OOM kill, the system tends to restart the container on its original machine, or to recreate the pod there or on another node.

example-1

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
  labels:
    app: nginx
spec:
  containers:
  - name: hello-nginx
    image: 172.16.130.151:4000/nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    resources:
  restartPolicy: Always
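The resources stanza in example-1 is left empty in the original. A filled-in sketch might look like the following; the values are purely illustrative, not recommendations:

```yaml
resources:
  requests:         # what the scheduler reserves on the node
    cpu: 250m
    memory: 64Mi
  limits:           # hard caps: CPU is throttled, memory is OOM-killed
    cpu: 500m
    memory: 128Mi
```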

Service and application management

First, use nginx-deployment.yaml (contents below) to create an Nginx Deployment running two pods. Once the pods are running, look up their IPs and access the Nginx service from inside the cluster via podIP and containerPort.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: 172.16.130.151:4000/nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx-1
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx-1
  type: LoadBalancer
Check the pod IPs
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get pods -o yaml -l app=nginx-1 | grep podIP
podIP: 10.100.83.3
podIP: 10.100.81.5

Access the Nginx service from inside the cluster

[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# curl 10.100.83.3:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html

Service discovery via a Service

After the Service is created, you still need its Cluster-IP, which combined with the port gives access to the Nginx service.

A Service abstracts away pod IPs: even if the pods are rebuilt, the service they provide remains reachable through the Service. The Service also solves load balancing; hit the Service a few times and check the access logs of the two Nginx pods with kubectl logs to confirm.

## Get the Service IP
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get service nginx-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service 10.254.145.0 10.0.0.16 80:32182/TCP 10m

Access the Service from inside the cluster:

[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get service nginx-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service 10.254.145.0 10.0.0.16 80:32182/TCP 10m
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# curl 10.254.145.0
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Service load balancing: the logs below show that requests to the Service are distributed evenly across the pods.

[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl logs nginx-deployment-1286323699-3qsj5
10.0.0.7 - - [25/Sep/2017:10:01:42 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:02:40 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:04:48 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:07:30 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:07:33 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:08:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:08:25 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl logs nginx-deployment-1286323699-j695f
10.0.0.7 - - [25/Sep/2017:10:07:29 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:07:31 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:07:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
10.0.0.7 - - [25/Sep/2017:10:07:32 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.51.0" "-"
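The even spread of requests across the two pods comes from the Service's proxy layer picking an endpoint per connection. Conceptually it behaves like a selector over the pod endpoints; a toy round-robin sketch in Python, reusing the pod IPs from the output above (real kube-proxy modes may pick endpoints randomly rather than strictly in turn):

```python
from itertools import cycle

class RoundRobinService:
    """Toy stand-in for a Service's endpoint selection: requests to the
    Service IP are spread across the backing pod IPs in turn."""
    def __init__(self, endpoints):
        self._cycle = cycle(endpoints)

    def route(self):
        # pick the next backend pod for this request
        return next(self._cycle)
```

With the two pod IPs from earlier, four consecutive requests alternate between them, matching the interleaved timestamps in the two pods' access logs.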

Accessing the service through a LoadBalancer

Here Neutron LBaaS serves as the load balancer. You can check the LB creation status in the Horizon dashboard.

(screenshot)

Once it is created, assign it a floating IP:

(screenshot)

Then access the application through the floating IP:

(screenshot)
