Magnum Research Notes 02 (magnum调研02)

Problem

The earlier preliminary research showed that Magnum can indeed bring up a Kubernetes cluster, but several Kubernetes components still had to be installed by hand, which is far from ideal.
After reading the code, I found that the latest Ocata code already supports deploying the Kubernetes DNS, dashboard, and other add-ons through Heat, and monitoring support has landed in the Pike code as well.

(screenshot)

Reading this got me a little excited: it means Magnum can deploy a Kubernetes platform with these advanced features in one shot.

Preparation

These preparation steps are also covered in the previous note, Magnum Research Notes 01 (magnum-调研01).

1. Prepare a registry. The installation has to work offline (mainly because gcr.io is blocked), so set up a private registry.

## Start the private registry
docker run -d -p 4000:5000 -v /opt/registry:/var/lib/registry --restart=always --name registry registry:2
## Images required
google_containers/kube-ui:v4
google_containers/pause-amd64:3.0
google_containers/hyperkube:v1.5.3
google_containers/pause:0.8.0
google_containers/kubedns-amd64:1.9
google_containers/dnsmasq-metrics-amd64:1.0
google_containers/kube-dnsmasq-amd64:1.4
google_containers/exechealthz-amd64:1.2
google_containers/defaultbackend:1.0
google_containers/nginx-ingress-controller:0.9.0-beta.11
## Tag and push the images (this can be automated with a small shell script; see the sketch after this block)
registry_url=192.168.21.10:4000   ## the registry started above is published on host port 4000
docker tag gcr.io/google_containers/kube-ui:v4 $registry_url/google_containers/kube-ui:v4
docker push $registry_url/google_containers/kube-ui:v4
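Since every image goes through the same tag-and-push cycle, the upload step is easy to script. A minimal sketch, assuming the pulls run on a host that can reach gcr.io and that the registry is the one started above; adjust registry_url and the image list for your environment:

#!/bin/bash
## mirror the gcr.io images listed above into the private registry
registry_url=192.168.21.10:4000
images="google_containers/kube-ui:v4
google_containers/pause-amd64:3.0
google_containers/hyperkube:v1.5.3
google_containers/pause:0.8.0
google_containers/kubedns-amd64:1.9
google_containers/dnsmasq-metrics-amd64:1.0
google_containers/kube-dnsmasq-amd64:1.4
google_containers/exechealthz-amd64:1.2
google_containers/defaultbackend:1.0
google_containers/nginx-ingress-controller:0.9.0-beta.11"
for image in $images; do
    docker pull gcr.io/$image                       ## needs a host that can reach gcr.io
    docker tag gcr.io/$image $registry_url/$image   ## retag for the private registry
    docker push $registry_url/$image
done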

2. The Fedora Atomic image

## Download the Fedora Atomic image
wget https://fedorapeople.org/groups/magnum/fedora-atomic-latest.qcow2
## Upload the image to Glance
openstack image create \
--disk-format=qcow2 \
--container-format=bare \
--file=fedora-atomic-latest.qcow2 \
--property os_distro='fedora-atomic' \
fedora-atomic-latest
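It is worth confirming the image registered correctly before moving on, since Magnum picks the fedora-atomic driver based on the os_distro property (the cluster_distro field in the template output later reflects this). Something along these lines:

## confirm the image is active and carries os_distro=fedora-atomic
openstack image show fedora-atomic-latest
openstack image list | grep fedora-atomic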

Deploying Magnum with Kolla

Note: deploying Magnum with Kolla is not covered in detail here; see the upstream kolla-ansible documentation (https://github.com/openstack/kolla-ansible).

In /etc/kolla/globals.yml, enable Magnum and neutron-lbaas (the load-balancer feature is needed later), use Ceph for storage, and enable the usual base components.

[root@master ~]# cat /etc/kolla/globals.yml
enable_horizon_magnum: "{{ enable_magnum | bool }}"
enable_magnum: "yes"
enable_neutron_lbaas: "yes"
[root@master ~]# kolla-ansible deploy
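The excerpt above only shows the Magnum and LBaaS switches. For reference, the Ceph and base-component toggles mentioned earlier live in the same globals.yml; a rough sketch of the relevant lines, assuming the usual kolla-ansible variable names (check them against your release):

## /etc/kolla/globals.yml (excerpt; variable names assumed from kolla-ansible defaults)
enable_ceph: "yes"
enable_cinder: "yes"
cinder_backend_ceph: "yes"
glance_backend_ceph: "yes"
enable_heat: "yes"   ## Magnum drives cluster creation through Heat

Once kolla-ansible deploy finishes, running magnum service-list is a quick way to confirm the magnum-conductor service came up.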

Creating the Magnum Kubernetes cluster template

Adjust the parameters to match your own environment.

magnum cluster-template-create kubernetes-cluster-template-1 \
--image fedora-atomic-latest \
--keypair mykey \
--external-network public1 \
--dns-nameserver 8.8.8.8 \
--master-flavor 4C.4G \
--flavor 4C.4G \
--labels kube_dashboard_enabled=True,flannel_backend=host-gw \
--coe kubernetes \
--docker-volume-size 40 \
--floating-ip-enabled \
--volume-driver cinder \
--insecure-registry 172.16.130.151:4000
## Parameter notes
The parameter that needs attention here is --labels. The documentation does not describe it in much detail, but
options such as kube_dashboard_enabled, the monitoring add-ons mentioned earlier, and the flannel tuning are all passed through it.
## Related code
Code path: magnum/magnum/drivers/heat/k8s_template_def.py
## On success, cluster-template-show looks like this
[root@master ~]# magnum cluster-template-show kubernetes-cluster-template-1
+-----------------------+------------------------------------------------------------------+
| Property | Value |
+-----------------------+------------------------------------------------------------------+
| insecure_registry | 172.16.130.151:4000 |
| labels | {'flannel_backend': 'host-gw', 'kube_dashboard_enabled': 'True'} |
| updated_at | - |
| floating_ip_enabled | True |
| fixed_subnet | - |
| master_flavor_id | 4C.4G |
| uuid | 669e5b3a-2cdf-42c1-9b8c-656b0f3bba6a |
| no_proxy | - |
| https_proxy | - |
| tls_disabled | False |
| keypair_id | mykey |
| public | False |
| http_proxy | - |
| docker_volume_size | 40 |
| server_type | vm |
| external_network_id | public1 |
| cluster_distro | fedora-atomic |
| image_id | fedora-atomic-latest |
| volume_driver | cinder |
| registry_enabled | False |
| docker_storage_driver | devicemapper |
| apiserver_port | - |
| name | kubernetes-cluster-template-1 |
| created_at | 2017-09-20T03:09:26+00:00 |
| network_driver | flannel |
| fixed_network | - |
| coe | kubernetes |
| flavor_id | 4C.4G |
| master_lb_enabled | False |
| dns_nameserver | 8.8.8.8 |
+-----------------------+------------------------------------------------------------------+
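Monitoring is driven through --labels as well (per the Pike code mentioned at the start). A hypothetical variant of the template above with it switched on might look like the following; prometheus_monitoring and grafana_admin_passwd are label names taken from Magnum's label list and should be verified against the version you are running:

## hypothetical template with the monitoring labels added
magnum cluster-template-create kubernetes-cluster-template-2 \
--image fedora-atomic-latest \
--keypair mykey \
--external-network public1 \
--dns-nameserver 8.8.8.8 \
--master-flavor 4C.4G \
--flavor 4C.4G \
--labels kube_dashboard_enabled=True,prometheus_monitoring=True,grafana_admin_passwd=admin,flannel_backend=host-gw \
--coe kubernetes \
--docker-volume-size 40 \
--floating-ip-enabled \
--volume-driver cinder \
--insecure-registry 172.16.130.151:4000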

Creating the Kubernetes cluster

Create a Kubernetes cluster with one master node and two worker nodes.

magnum cluster-create --name test-3 --node-count 2 --master-count 1 --cluster-template kubernetes-cluster-template-1
## On success, cluster-show looks like this
[root@master ~]# magnum cluster-show test-3
+---------------------+------------------------------------------------------------+
| Property | Value |
+---------------------+------------------------------------------------------------+
| status | CREATE_COMPLETE |
| cluster_template_id | 4886c5b3-2f0f-4b1a-94a4-7f7c4f6eb0d9 |
| node_addresses | ['172.16.150.164', '172.16.150.169'] |
| uuid | 65c95601-8871-48de-8ebc-e2efe848111c |
| stack_id | 2ba19cad-8ed9-47ab-8340-a4f26b57e333 |
| status_reason | Stack CREATE completed successfully |
| created_at | 2017-09-20T07:20:29+00:00 |
| updated_at | 2017-09-20T07:24:03+00:00 |
| coe_version | v1.5.3 |
| keypair | mykey |
| api_address | https://172.16.150.154:6443 |
| master_addresses | ['172.16.150.154'] |
| create_timeout | 60 |
| node_count | 2 |
| discovery_url | https://discovery.etcd.io/d88395d69fa826a349ad7c7323977cec |
| master_count | 1 |
| container_version | 1.12.6 |
| name | test-3 |
+---------------------+------------------------------------------------------------+
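Creation takes a few minutes (create_timeout above is 60). While it runs, progress can be followed from both Magnum and the underlying Heat stack, roughly like this (the stack ID is the stack_id from cluster-show):

## watch the cluster status
magnum cluster-list
## drill into the Heat stack Magnum created
openstack stack resource list 2ba19cad-8ed9-47ab-8340-a4f26b57e333
## if the cluster ends up in CREATE_FAILED, the failed resources usually point at the culprit
openstack stack failures list 2ba19cad-8ed9-47ab-8340-a4f26b57e333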

Accessing the Kubernetes cluster

There are two ways to access the Kubernetes cluster:
1. Log in to the Kubernetes master node and use kubectl directly.
2. For remote access, use magnum cluster-config to generate the required CA/client certificate files and kubeconfig.

## Option 1: log in to the master node directly
[root@master ~]# nova list
/usr/lib/python2.7/site-packages/novaclient/client.py:278: UserWarning: The 'tenant_id' argument is deprecated in Ocata and its use may result in errors in future releases. As 'project_id' is provided, the 'tenant_id' argument will be ignored.
warnings.warn(msg)
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------+
| dd8d6bf1-882c-4132-b90e-fbf251a04351 | te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo | ACTIVE | - | Running | private=10.0.0.9, 172.16.150.164 |
| 215e1f59-8270-488b-b02a-1f83f0ad8fae | te-ipopai2hb7-1-5z3wzug3hsjg-kube-minion-ktzk5ugdsclg | ACTIVE | - | Running | private=10.0.0.12, 172.16.150.169 |
| e45e4f94-d9d3-4b6b-8632-9437d7fa7101 | te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f | ACTIVE | - | Running | private=10.0.0.7, 172.16.150.154 |
| a435bda3-dbfb-4df1-b426-42f4142f1b4d | te-tcrckexviy-0-66y6o7a76osg-kube-master-yozurhnfidfw | ACTIVE | - | Running | private=10.0.0.12, 172.16.150.158 |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+-----------------------------------+
[root@master ~]# ssh fedora@172.16.150.154
Warning: Permanently added '172.16.150.154' (ECDSA) to the list of known hosts.
Last login: Wed Sep 20 09:34:09 2017 from 172.16.150.163
[fedora@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f ~]$
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get node
NAME STATUS AGE
te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo Ready 3h
te-ipopai2hb7-1-5z3wzug3hsjg-kube-minion-ktzk5ugdsclg Ready 3h
te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f Ready,SchedulingDisabled 3h
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx 1/1 Running 0 1h
kube-system coredns-2417336708-gv1df 1/1 Running 0 3h
kube-system kube-controller-manager-te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f 1/1 Running 0 3h
kube-system kube-proxy-te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo 1/1 Running 0 3h
kube-system kube-proxy-te-ipopai2hb7-1-5z3wzug3hsjg-kube-minion-ktzk5ugdsclg 1/1 Running 0 3h
kube-system kube-proxy-te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f 1/1 Running 0 3h
kube-system kube-scheduler-te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f 1/1 Running 0 3h
kube-system kubernetes-dashboard-953129222-2m1fl 1/1 Running 0 3h
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]#
## Option 2: use magnum cluster-config
[root@master ~]# mkdir -p ~/clusters/kubernetes-cluster
[root@master ~]# $(magnum cluster-config test-3 --dir ~/clusters/kubernetes-cluster)
export KUBECONFIG=/home/user/clusters/kubernetes-cluster/config
[root@master ~]# kubectl -n kube-system get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx 1/1 Running 0 1h
kube-system coredns-2417336708-gv1df 1/1 Running 0 3h
kube-system kube-controller-manager-te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f 1/1 Running 0 3h
kube-system kube-proxy-te-ipopai2hb7-0-edce6decnmfd-kube-minion-guvv4egh2oeo 1/1 Running 0 3h
kube-system kube-proxy-te-ipopai2hb7-1-5z3wzug3hsjg-kube-minion-ktzk5ugdsclg 1/1 Running 0 3h
kube-system kube-proxy-te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f 1/1 Running 0 3h
kube-system kube-scheduler-te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f 1/1 Running 0 3h
kube-system kubernetes-dashboard-953129222-2m1fl 1/1 Running 0 3h
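With KUBECONFIG pointing at the generated config, any kubectl command now runs from the deployment host; a quick sanity check is to confirm the endpoint matches the api_address reported by magnum cluster-show:

## the server URL should match the api_address from cluster-show (https://172.16.150.154:6443)
kubectl cluster-info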

Verifying Kubernetes features

Dashboard

## Look at the dashboard service
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get svc -n kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.254.0.10 <none> 53/UDP,53/TCP 3h
kubernetes-dashboard 10.254.14.9 <nodes> 80:30029/TCP 3h
## Inspect the dashboard service definition
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f fedora]# kubectl get svc kubernetes-dashboard -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-09-20T07:22:11Z
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "40"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 6af71858-9dd4-11e7-b8fc-fa163e86cfba
spec:
  clusterIP: 10.254.14.9
  ports:
  - nodePort: 30029
    port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
As shown above, the dashboard service is exposed as a NodePort, so it can be reached at http://node_ip:30029.
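Because it is a NodePort service, the port is open on every node, so it can also be checked from any host that can reach the node floating IPs (addresses from the cluster-show output earlier):

## hit the dashboard through the NodePort on one of the minions
curl -I http://172.16.150.164:30029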

And this is our Kubernetes dashboard:

(screenshot: the Kubernetes dashboard)

DNS

The DNS service in a Magnum Kubernetes cluster uses CoreDNS by default;
see the CoreDNS documentation for the details of how it works.

## Create a busybox pod
[root@master example]# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: 172.16.130.151:4000/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
[root@master example]# kubectl create -f busybox.yaml
## Resolution works perfectly, nice
[root@master example]# kubectl exec busybox nslookup kubernetes
Server: 10.254.0.10
Address 1: 10.254.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local
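Cross-namespace names resolve as well; for example, the dashboard service from earlier can be looked up with its namespace-qualified name (same busybox pod, a quick extra check):

kubectl exec busybox nslookup kubernetes-dashboard.kube-system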

Using Neutron LBaaS from Kubernetes

## Start a test nginx pod
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f examples]# cat test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: 172.16.130.151:4000/nginx
    ports:
    - containerPort: 80
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f examples]# kubectl create -f test-pod.yaml
pod "nginx" created
## Create the test service
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f examples]# cat test-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
  type: LoadBalancer
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f examples]# kubectl create -f test-service.yaml
service "nginxservice" created
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f examples]# kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 4h
nginxservice 10.254.84.116 10.0.0.15 80:30443/TCP 2m
## Access the service through its EXTERNAL-IP
[root@te-lgs3rgroc5-0-dvcxdlqunz3h-kube-master-sespgox2v33f examples]# curl 10.0.0.15
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

At this point we can check in Horizon that the load balancer was created.

(screenshot: the load balancer shown in Horizon)

Once it has been created successfully, assign it a floating IP.
(screenshot: associating a floating IP with the load balancer VIP)
Then access the application through the floating IP.
(screenshot: the nginx welcome page reached via the floating IP)
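The same steps can also be done from the CLI instead of Horizon. A rough sketch using the neutron LBaaS v2 commands; the IDs are placeholders and the load balancer name Kubernetes picks will differ:

## find the load balancer Kubernetes created and note its vip_port_id
neutron lbaas-loadbalancer-list
neutron lbaas-loadbalancer-show <lb_id>
## create a floating IP on the external network and attach it to the VIP port
neutron floatingip-create public1
neutron floatingip-associate <floatingip_id> <vip_port_id>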
