Kubernetes Installation
Following up on the earlier Kubernetes Introduction post, this time I walked through installing Kubernetes.
You could use Kubernetes Engine on GCP's free tier, but installing it by hand seemed like a good way to understand the architecture, so I set it up myself.
The installation follows the official Kubernetes documentation.
# Server Setup
As with the earlier Mesos setup, I used VMs.
Below are the requirements from the official Kubernetes site; provision your machines to match them.
- 2 GB or more of RAM per machine (any less will leave little room for your apps)
- 2 CPUs or more
- Full network connectivity between all machines in the cluster (public or private network is fine)
- Unique hostname, MAC address, and product_uuid for every node. See here for more details.
- Certain ports are open on your machines. See here for more details.
- Swap disabled. You MUST disable swap in order for the kubelet to work properly.
VM setup posting: https://box0830.tistory.com/255
Create your servers to satisfy the conditions above.
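For example, the swap requirement can be met like this (a minimal sketch; the sed expression assumes your swap entries are defined in /etc/fstab):
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
The first command disables swap immediately; the second comments out the fstab entries so it stays off after a reboot.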
On the created servers, the following ports must be open for the services to run properly.
## Master Node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 6443* | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10251 | kube-scheduler | Self |
| TCP | Inbound | 10252 | kube-controller-manager | Self |
## Worker Node(s)
| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services** | All |
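If a host firewall is enabled on the VMs, those ports need to be opened explicitly. A sketch for a worker node using ufw (assuming ufw is the firewall in use; I did not run one on these test VMs):
ufw allow 10250/tcp
ufw allow 30000:32767/tcp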
After that, you also need to do some routing work.
If you follow the VM posting above, the default route ends up registered on the NAT network, so you have to add an additional default route through the host-only network.
Change the enp0s8 adapter stanza in /etc/network/interfaces as follows:
# The Host-only network interface
auto enp0s8
iface enp0s8 inet static
address 192.168.56.101
netmask 255.255.255.0
post-up route add default netmask 255.255.255.0 gw 192.168.56.1 dev enp0s8
Then restart the interface and check the routing table:
sudo ifdown enp0s8 && sudo ifup enp0s8
route
In my case it came out like this:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.56.1 255.255.255.0 UG 0 0 0 enp0s8
default 10.0.2.2 0.0.0.0 UG 0 0 0 enp0s3
10.0.2.0 * 255.255.255.0 U 0 0 0 enp0s3
192.168.56.0 * 255.255.255.0 U 0 0 0 enp0s8
# Common Steps for Master and Worker
## kubeadm, kubelet, kubectl
Each component is responsible for the following:
- kubeadm : the command to bootstrap the cluster
- kubelet : the component that runs on all of the machines in your cluster and does things like starting pods and containers
- kubectl : the command line util to talk to your cluster
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
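A quick sanity check that all three components landed (exact versions will differ by release):
kubeadm version
kubelet --version
kubectl version --client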
## cgroup driver
As explained in the earlier introduction, containers isolate and run processes through two Linux mechanisms: namespaces and cgroups. kubeadm normally detects the cgroup driver automatically, but if you use a different CRI you have to register the driver manually in /etc/default/kubelet:
KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
Once registered, restart the service:
systemctl daemon-reload
systemctl restart kubelet
Since this was only a test install, I skipped that step. (Skipping it only produces a warning, so the installation still proceeds.)
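If you do want to check which cgroup driver Docker is using before deciding, you can read it straight from docker info:
docker info | grep -i cgroup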
## docker
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
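After the install it is worth confirming that the pinned 17.03 version was picked up, and optionally holding the package so a later apt upgrade does not move it (the hold is my suggestion, not part of the original guide):
docker version
apt-mark hold docker-ce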
# Master
## Initializing master(s)
The basic invocation is the init command plus args; for details on the available args, see the kubeadm reference documentation.
kubeadm init <args>
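For reference, on a multi-NIC VM setup like this one, two args that commonly matter are --apiserver-advertise-address (to pin the API server to the host-only interface rather than the NAT one) and --pod-network-cidr (some network add-ons expect it). A hypothetical invocation for this environment might look like:
kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=192.168.0.0/16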
To get through the install quickly, I ran the command with the args omitted:
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [jw-ubuntu01 localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [jw-ubuntu01 localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [jw-ubuntu01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.002800 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node jw-ubuntu01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node jw-ubuntu01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uwfy5f.lcqud5r3slxusb5z
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.101:6443 --token uwfy5f.lcqud5r3slxusb5z \
--discovery-token-ca-cert-hash sha256:dd2f573f878a892ac92e97e29263f0f79d1d0b050dc015020499d9838cb1107f
## kubectl setup
To use kubectl, register the environment variable below. I was working as root, so this is the root-user method; the regular-user method appears in the init output above.
export KUBECONFIG=/etc/kubernetes/admin.conf
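The export only lives for the current shell session; to make it stick, you could append it to root's shell profile (assuming bash):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc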
## pod network add-on
Next, let's install a pod network add-on so that pods can communicate with each other.
The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).
As the documentation says, the network must be deployed before any applications, and only CNI-based networks are supported. There are several CNI network projects; among them I went with Calico as the pod network add-on.
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Running the commands above deploys it automatically.
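You can watch the Calico and CoreDNS pods come up with the -w (watch) flag; CoreDNS stays in Pending/ContainerCreating until the network is in place:
kubectl get pods -n kube-system -w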
## Control plane node isolation
By default the cluster will not schedule pods on the master node, for safety. If you want pods to be schedulable on the master too (reasonable for a small test cluster like this one), remove the master taint:
kubectl taint nodes --all node-role.kubernetes.io/master-
Running the command returns the response below; it removes the node-role.kubernetes.io/master taint from any node that has it.
node "jw-ubuntu01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
# Worker
After completing the common installation steps, join the node to the master.
When you run "kubeadm init" on the master, the very end of the output includes the token and a ready-made join command; run it as-is from the root account on each worker.
kubeadm join 192.168.56.101:6443 --token uwfy5f.lcqud5r3slxusb5z \
--discovery-token-ca-cert-hash sha256:dd2f573f878a892ac92e97e29263f0f79d1d0b050dc015020499d9838cb1107f
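One caveat: the bootstrap token expires after 24 hours by default. If it has expired by the time you add a worker, you can print a fresh join command on the master:
kubeadm token create --print-join-command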
# Verification Commands
## Listing nodes
kubectl get nodes
NAME STATUS ROLES AGE VERSION
jw-ubuntu01 Ready master 4h36m v1.14.1
jw-ubuntu02 Ready <none> 4h35m v1.14.1
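Adding -o wide shows each node's InternalIP and container runtime as well, which is handy for spotting the NAT-vs-host-only address issue from the routing section:
kubectl get nodes -o wide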
## Describing nodes
kubectl describe nodes
Name: jw-ubuntu01
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=jw-ubuntu01
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.56.101/24
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 30 Apr 2019 12:13:22 +0900
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 30 Apr 2019 14:08:27 +0900 Tue, 30 Apr 2019 12:13:18 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 30 Apr 2019 14:08:27 +0900 Tue, 30 Apr 2019 12:13:18 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 30 Apr 2019 14:08:27 +0900 Tue, 30 Apr 2019 12:13:18 +0900 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 30 Apr 2019 14:08:27 +0900 Tue, 30 Apr 2019 12:35:26 +0900 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 192.168.56.101
Hostname: jw-ubuntu01
Capacity:
cpu: 2
ephemeral-storage: 39423400Ki
hugepages-2Mi: 0
memory: 4046276Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 36332605380
hugepages-2Mi: 0
memory: 3943876Ki
pods: 110
System Info:
Machine ID: d77c67060a1d192beddeeabc5cc7b253
System UUID: B98F7158-E6AE-4410-B439-E49ED40D380D
Boot ID: be7699d1-59aa-4f50-8271-98e8821db83b
Kernel Version: 4.4.0-142-generic
OS Image: Ubuntu 16.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.3
Kubelet Version: v1.14.1
Kube-Proxy Version: v1.14.1
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-4547b 250m (12%) 0 (0%) 0 (0%) 0 (0%) 94m
kube-system coredns-fb8b8dccf-jpbb6 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 115m
kube-system coredns-fb8b8dccf-nzcfw 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 115m
kube-system etcd-jw-ubuntu01 0 (0%) 0 (0%) 0 (0%) 0 (0%) 114m
kube-system kube-apiserver-jw-ubuntu01 250m (12%) 0 (0%) 0 (0%) 0 (0%) 115m
kube-system kube-controller-manager-jw-ubuntu01 200m (10%) 0 (0%) 0 (0%) 0 (0%) 114m
kube-system kube-proxy-jlf86 0 (0%) 0 (0%) 0 (0%) 0 (0%) 115m
kube-system kube-scheduler-jw-ubuntu01 100m (5%) 0 (0%) 0 (0%) 0 (0%) 114m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1 (50%) 0 (0%)
memory 140Mi (3%) 340Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>
Name: jw-ubuntu02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=jw-ubuntu02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 30 Apr 2019 12:14:10 +0900
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 30 Apr 2019 14:09:12 +0900 Tue, 30 Apr 2019 12:14:10 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 30 Apr 2019 14:09:12 +0900 Tue, 30 Apr 2019 12:14:10 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 30 Apr 2019 14:09:12 +0900 Tue, 30 Apr 2019 12:14:10 +0900 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 30 Apr 2019 14:09:12 +0900 Tue, 30 Apr 2019 12:35:33 +0900 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.0.2.15
Hostname: jw-ubuntu02
Capacity:
cpu: 1
ephemeral-storage: 40168028Ki
hugepages-2Mi: 0
memory: 2048092Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 37018854544
hugepages-2Mi: 0
memory: 1945692Ki
pods: 110
System Info:
Machine ID: f4dcd0ea2adf2134d81e57e25cbed453
System UUID: 24BB554F-8B21-4C33-B24F-20D15D0CDF80
Boot ID: 707c65ab-b03b-427b-bf11-22842ed52b13
Kernel Version: 4.4.0-142-generic
OS Image: Ubuntu 16.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.3
Kubelet Version: v1.14.1
Kube-Proxy Version: v1.14.1
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-vvbhg 250m (25%) 0 (0%) 0 (0%) 0 (0%) 94m
kube-system kube-proxy-5gbdx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 115m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (25%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>
## Listing pods
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-4547b 2/2 Running 0 97m
kube-system calico-node-vvbhg 1/2 CrashLoopBackOff 29 97m
kube-system coredns-fb8b8dccf-jpbb6 0/1 ContainerCreating 0 118m
kube-system coredns-fb8b8dccf-nzcfw 0/1 ContainerCreating 0 118m
kube-system etcd-jw-ubuntu01 1/1 Running 0 117m
kube-system kube-apiserver-jw-ubuntu01 1/1 Running 0 117m
kube-system kube-controller-manager-jw-ubuntu01 1/1 Running 0 117m
kube-system kube-proxy-5gbdx 1/1 Running 0 117m
kube-system kube-proxy-jlf86 1/1 Running 0 118m
kube-system kube-scheduler-jw-ubuntu01 1/1 Running 0 117m
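Note that in my output the worker's calico-node pod is stuck in CrashLoopBackOff and CoreDNS is still in ContainerCreating. With this kind of VirtualBox setup that is often tied to the worker having registered its NAT address (InternalIP 10.0.2.15 in the describe output above) instead of the host-only one. The usual first diagnostic steps would be something like:
kubectl logs -n kube-system calico-node-vvbhg -c calico-node
kubectl describe pod -n kube-system calico-node-vvbhg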