Last updated: 23 Sep 21 05:15:24
Kube Install
Tutorial
We install K8s with one master and two minion nodes. Networking uses flannel, and DNS uses CoreDNS. We then launch the K8s dashboard UI application, attach an external volume using GlusterFS, and finally deploy a MongoDB application.
Environment
CentOS 7
Docker 1.13.1 -> 20.10.8
Kubernetes 1.13 -> 1.22
Dashboard 1.10 -> 2.3.1
flannel 0.14
kube-dns 1.14
K8s Installation
- Common steps on all nodes
vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd-current \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--authorization-plugin=rhel-push-plugin \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
--init-path=/usr/libexec/docker/docker-init-current \
--seccomp-profile=/etc/docker/seccomp.json \
# not needed on Docker CE 20.10.8
# the registry options are also unnecessary when using Harbor
#-H tcp://0.0.0.0:4243 \
#-H unix:///var/run/docker.sock \
#--insecure-registry=172.16.15.241:5000 \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY \
$REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process
> systemctl daemon-reload; systemctl restart docker
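kubelet and Docker must agree on the cgroup driver (systemd here, per the `--exec-opt native.cgroupdriver=systemd` line above), or kubelet will refuse to start. A minimal sanity check of the unit file — a sketch; on Docker CE 20.10.x the driver is usually set via `"exec-opts"` in /etc/docker/daemon.json instead, and `docker info --format '{{.CgroupDriver}}'` reports the live value:

```shell
# Check that the Docker unit file pins the systemd cgroup driver.
# Pass a different path as $1 to test against another file.
unit="${1:-/lib/systemd/system/docker.service}"
if grep -q 'native.cgroupdriver=systemd' "$unit" 2>/dev/null; then
  echo "cgroup driver: systemd (matches kubelet)"
else
  echo "WARNING: systemd cgroup driver not found in $unit"
fi
```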
# older k8s versions
> vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS
# latest k8s (1.22)
> vi /etc/kubernetes/manifests/kube-controller-manager.yaml
- --cluster-cidr=102.31.0.0/16
- --service-cluster-ip-range=10.96.0.0/12
# master node
> systemctl daemon-reload
> systemctl enable kubelet
> systemctl restart kubelet
# minion nodes (run on each of the two minions)
> systemctl daemon-reload
> systemctl enable kubelet
> systemctl restart kubelet
> vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.netfilter.nf_conntrack_max = 786432
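The two bridge keys only exist once the br_netfilter module is loaded (`modprobe br_netfilter`), and the file must be applied with `sysctl -p` before running kubeadm. A small check — a sketch — that all three keys actually made it into the config; pass another file as $1 to test:

```shell
# Verify the three kernel keys above are present in the sysctl config.
conf="${1:-/etc/sysctl.conf}"
for key in net.bridge.bridge-nf-call-iptables \
           net.bridge.bridge-nf-call-ip6tables \
           net.netfilter.nf_conntrack_max; do
  if grep -q "^${key}" "$conf" 2>/dev/null; then
    echo "ok: $key"
  else
    echo "MISSING: $key"
  fi
done
```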
# master node
> kubeadm init --pod-network-cidr 102.31.0.0/16 --service-cidr 10.96.0.0/12 --service-dns-domain "cluster.local" --apiserver-advertise-address 172.16.15.140 --token-ttl 0
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubenet1 localhost] and IPs [172.16.15.140 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubenet1 localhost] and IPs [172.16.15.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubenet1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.10 172.16.15.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.502129 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubenet1" as an annotation
[mark-control-plane] Marking the node kubenet1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubenet1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2fdwmb.b8sr2ygbo1p99vq9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
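The --pod-network-cidr and --service-cidr ranges passed to kubeadm init must not overlap with each other (or with the node network). A quick POSIX-shell overlap check — `ip2int` and `overlap` are helper names invented for this sketch, not kubeadm tooling:

```shell
# Convert a dotted-quad address to a 32-bit integer.
ip2int() {
  oldifs=$IFS; IFS=.; set -- $1; IFS=$oldifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "yes" if two CIDR blocks overlap, "no" otherwise.
overlap() {
  n1=${1%/*}; l1=${1#*/}; n2=${2%/*}; l2=${2#*/}
  l=$(( l1 < l2 ? l1 : l2 ))                       # shorter prefix wins
  m=$(( (0xFFFFFFFF << (32 - l)) & 0xFFFFFFFF ))   # its netmask
  if [ $(( $(ip2int "$n1") & m )) -eq $(( $(ip2int "$n2") & m )) ]; then
    echo yes
  else
    echo no
  fi
}

overlap 102.31.0.0/16 10.96.0.0/12   # the two ranges used above -> prints: no
```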
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
> vi kube-flannel.yml
"Network": "102.31.0.0/16",
> kubectl create -f kube-flannel.yml
> kubectl delete -n kube-system svc,pod coredns
> kubectl create -f ...
centosPod.yaml kubedns-cm.yaml kubedns-sa.yaml
centosSvc.yaml kubedns-controller.yaml kubedns-svc.yaml
hostnameSvc.yaml hostnamesDeploy.yaml
# DNS test
> kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh
[root@test /]# ping kube-dns.kube-system.svc.cluster.local
PING kube-dns.kube-system.svc.cluster.local (10.96.0.10) 56(84) bytes of data.
From 1.213.94.6 (1.213.94.6) icmp_seq=1 Time to live exceeded
From 1.213.94.6 (1.213.94.6) icmp_seq=2 Time to live exceeded
# DNS round-robin test
# TTL 15; try several times
[root@test /]# wget -O- http://hostnames
--2018-12-13 06:46:36-- http://hostnames/
Resolving hostnames (hostnames)... 10.99.12.208
Connecting to hostnames (hostnames)|10.99.12.208|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27 [text/plain]
Saving to: 'STDOUT'
0% [ ] 0 --.-K/s
hostnames-57bf77cb5d-8xvmd
100%[==========================================================>] 27 --.-K/s in 0s
2018-12-13 06:46:36 (5.01 MB/s) - written to stdout [27/27]
[root@test /]# wget -O- http://hostnames
--2018-12-13 06:46:36-- http://hostnames/
Resolving hostnames (hostnames)... 10.99.12.208
Connecting to hostnames (hostnames)|10.99.12.208|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27 [text/plain]
Saving to: 'STDOUT'
0% [ ] 0 --.-K/s
hostnames-57bf77cb5d-jlsgr
100%[==========================================================>] 27 --.-K/s in 0s
2018-12-13 06:46:36 (4.74 MB/s) - written to stdout [27/27]
[root@test /]# wget -O- http://hostnames
--2018-12-13 06:46:37-- http://hostnames/
Resolving hostnames (hostnames)... 10.99.12.208
Connecting to hostnames (hostnames)|10.99.12.208|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27 [text/plain]
Saving to: 'STDOUT'
0% [ ] 0 --.-K/s
hostnames-57bf77cb5d-lqr6w
100%[==========================================================>] 27 --.-K/s in 0s
2018-12-13 06:46:37 (4.59 MB/s) - written to stdout [27/27]
> kubeadm join 172.16.15.140:6443 --token 2fdwmb.b8sr2ygbo1p99vq9 --discovery-token-ca-cert-hash sha256:5e2ac1fff395cebd7a7cbbd3c25ce2936d8f8b918507e6c38a08e43b541122e5
> kubeadm join 172.16.15.140:6443 --token 2fdwmb.b8sr2ygbo1p99vq9 --discovery-token-ca-cert-hash sha256:5e2ac1fff395cebd7a7cbbd3c25ce2936d8f8b918507e6c38a08e43b541122e5
> kubeadm join 172.16.15.140:6443 --token 2fdwmb.b8sr2ygbo1p99vq9 --discovery-token-ca-cert-hash sha256:5e2ac1fff395cebd7a7cbbd3c25ce2936d8f8b918507e6c38a08e43b541122e5
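Since --token-ttl 0 was passed, the token above never expires, but if the join command itself is misplaced it can be regenerated on the master with `kubeadm token create --print-join-command`. Alternatively, if the kubeadm init output was saved to a file (kubeadm-init.log is a hypothetical name), the wrapped two-line command can be stitched back together:

```shell
# Re-assemble the "kubeadm join ... \" continuation lines from a saved log.
log="${1:-kubeadm-init.log}"
if [ -f "$log" ]; then
  grep -A1 'kubeadm join' "$log" | tr -d '\\' | tr -s '\n ' '  '
  echo
else
  echo "no saved init log at $log"
fi
```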
#############################################################################
# create a simple k8s dashboard
> kubectl create -f kubernetes-dashboard.yaml
# old version (1.13)
> cat <<EOF | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
> kubectl edit service kubernetes-dashboard -n kube-system
# change the service to NodePort for external access
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - nodePort: 32469
    port: 8443
  selector:
    k8s-app: kubernetes-dashboard
# check the actual external port
> kubectl get services kubernetes-dashboard -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.102.116.29 <none> 443:32469/TCP 8m54s
# check the access token
> kubectl describe serviceaccount kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: kubernetes-dashboard-token-s2kmd
Tokens: kubernetes-dashboard-token-s2kmd
Events: <none>
# check the token value and copy it
> kubectl describe secrets kubernetes-dashboard-token-s2kmd -n kube-system
Name: kubernetes-dashboard-token-s2kmd
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: e22caef4-fea6-11e8-b47c-005056bafe92
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1zMmttZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImUyMmNhZWY0LWZlYTYtMTFlOC1iNDdjLTAwNTA1NmJhZmU5MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.kr25R9IdyRAZjhTzbeaoBofnd67jmdluUDK5djNXp_Lgv9vfv2SzDUUML6RWrXwGv96zyhch-qtBTrbYqpMZpNh75bqmCUsUdCSKBNT1rRKuJRu96nag4krBwj9VGUu8njBTfdM9wG3qoB2xcmOkPffJSfzvaMe64Zp_QK6ZaX81pQeiAn3p0rp5dUQBRh38Ne101pVYScXmzUz1mhz7cIbI-seTJWKb-z2KTsBCcU98lQafUw19aDPLhvjPjVDCd4tLNr7XZb3jyuPD7beFKMUL1OYX35Z-0DMV5AMGuio6cmxNezaoOWhXFaRLljwoHAMU841SUDNa1wxNBWg8xQ
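The token above is a JWT (header.payload.signature, each part base64url-encoded), so its claims can be inspected offline before pasting it into the login screen. A sketch that builds a demo token around a sample payload and decodes it back — with the real token you would skip the build step and decode field 2 directly:

```shell
# Sample claims (illustrative; the real token carries more fields).
claims='{"sub":"system:serviceaccount:kube-system:kubernetes-dashboard"}'

# Encode as base64url without padding, as JWTs do, and wrap in dummy parts.
payload=$(printf '%s' "$claims" | base64 | tr -d '=\n' | tr '/+' '_-')
token="header.${payload}.signature"

# Decode: take the 2nd dot-separated field, undo base64url, restore padding.
p=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
printf '%s\n' "$p" | base64 -d
```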
# access the k8s dashboard web UI
https://172.16.15.140:32469/#!/login
# k8s monitoring
> kubectl create -f influxdb.yaml
> kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 99m
kubernetes-dashboard NodePort 10.102.116.29 <none> 443:32469/TCP 38m
monitoring-influxdb ClusterIP 10.99.42.54 <none> 8086/TCP 13m
> netstat -anp | grep LIST| grep kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 28446/kubelet
tcp 0 0 127.0.0.1:41539 0.0.0.0:* LISTEN 28446/kubelet
tcp6 0 0 :::10250 :::* LISTEN 28446/kubelet
> vi heapster.yaml
- --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
- --sink=influxdb:http://monitoring-influxdb:8086
> kubectl create -f heapster.yaml
# allocate an external volume resource for k8s (using GlusterFS)
# run on the external storage server
> yum install centos-release-gluster41.noarch
> yum install glusterfs-server.x86_64
> systemctl start glusterd
> systemctl enable glusterd
> mkdir -p /data/kube_vol
> chmod 777 /data/kube_vol
> gluster volume create kube_vol transport tcp 172.16.15.241:/data/kube_vol force
> gluster volume start kube_vol
# check the volume
> gluster volume info all
# on the master server
# persistentVolumeClaim Volume
> kubectl create -f ...
glusterfs-endpoints.json glusterfs-pod.json glusterfs-service.json pv.yaml pvs.yaml
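pv.yaml and pvs.yaml are not reproduced here; a minimal sketch of what they might contain for the kube_vol volume created above (the names and the 5Gi size are illustrative, not the author's actual files):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv            # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # must match glusterfs-endpoints.json
    path: kube_vol
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc           # illustrative name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```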
# install the Jenkins server
> kubectl create -f ...
jenkins.yaml jenkinsSvc.yaml
> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
glusterfs-cluster ClusterIP 10.102.89.237 <none> 24007/TCP 22m
hostnames ClusterIP 10.99.12.208 <none> 80/TCP 111m
jenkins-master NodePort 10.98.72.50 <none> 50000:31977/TCP,8080:31695/TCP 21s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8h
test ClusterIP 10.96.120.54 <none> 80/TCP 3h56m
# Jenkins web portal URL
http://172.16.15.140:31695/
# avoid podAntiAffinity scheduling errors
annotations: >
{
"podAntiAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": [
{
"topologyKey": "kubernetes.io/hostname"
}
]
}
}
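The annotation above is the old alpha-annotation form of the scheduler affinity API. On the 1.22 side of this upgrade, the same constraint is expressed as a first-class field on the pod spec — a sketch, where the app: jenkins label is hypothetical and must match your pod template's own labels:

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: jenkins        # hypothetical; use your pod's own labels
        topologyKey: kubernetes.io/hostname
```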
# install the MongoDB server
> kubectl create secret generic shared-bootstrap-data --from-file=internal-auth-mongodb-keyfile=mongodb-keyfile
secret/shared-bootstrap-data created
> kubectl create -f mongodb-service.yaml
service/mongodb-svc created
service/mongodb-hs created
statefulset.apps/mongod-ss created
> ./02-configure_repset_auth.sh vmware1!
Configuring the MongoDB Replica Set
MongoDB shell version v3.4.18
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.18
{ "ok" : 1 }
Waiting for the MongoDB Replica Set to initialise...
MongoDB shell version v3.4.18
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.18
.
...initialisation of MongoDB Replica Set completed
Creating user: 'main_admin'
MongoDB shell version v3.4.18
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.18
Successfully added user: {
"user" : "main_admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}