Part 1/3 is here.
Part 2/3 is here.
Hi everyone! And here is the third part of the Kubernetes bare-metal tutorial! This time we'll focus on cluster monitoring and log collection, launch a test application that uses the cluster components we configured earlier, and then run a few stress tests to check how stable this cluster scheme really is.
The most popular tool the Kubernetes community offers for getting a web-based interface and cluster statistics is Kubernetes Dashboard. It is actually still under development, but even now it provides useful extra data for troubleshooting applications and managing cluster resources.
The topic is somewhat controversial: do we really need any web interface to manage a cluster, or is the kubectl console tool enough? Well, sometimes these options simply complement each other.
Let's deploy the Kubernetes Dashboard and see. With the standard deployment, this dashboard only starts on a localhost address, so you have to use the kubectl proxy command to reach it, and it is then only available from your local kubectl control machine. That's fine from a security point of view, but I want to access it from a browser outside the cluster, and I'm prepared to take some risk (after all, it uses SSL with a valid token).
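For reference, the default way in looks roughly like this (assuming the dashboard keeps its standard service name in the kube-system namespace):

control# kubectl proxy

Then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ in a browser on that same machine.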
To apply my approach instead, you need to slightly change the standard deployment file in the service section: to expose the dashboard on an external address, we'll use our load balancer.
Log in to the system with the kubectl utility we configured earlier and create the following:
control# vi kube-dashboard.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Then run:
control# kubectl create -f kube-dashboard.yaml
control# kubectl get svc --namespace=kube-system
kubernetes-dashboard   LoadBalancer   10.96.164.141   192.168.0.240   443:31262/TCP   8h
As you can see, our MetalLB load balancer assigned the IP 192.168.0.240 to this service. Try opening https://192.168.0.240 and you should see the Kubernetes Dashboard.
There are two ways to log in: use the admin.conf file from the master node that we used earlier when setting up kubectl, or create a dedicated service account with a security token.
Let's create an admin user:
control# vi kube-dashboard-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

control# kubectl create -f kube-dashboard-admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
Next we need the token to log in to the system:
control# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Name:         admin-user-token-vfh66
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 3775471a-3620-11e9-9800-763fc8adcb06

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      erJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwna3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VmLXRva2VuLXZmaDY2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNzc1NDcxYS0zNjIwLTExZTktOTgwMC03NjNmYzhhZGNiMDYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.JICASwxAJHFX8mLoSikJU1tbij4Kq2pneqAt6QCcGUizFLeSqr2R5x339ZR8W49cIsbZ7hbhFXCATQcVuWnWXe2dgXP5KE8ZdW9uvq96rm_JvsZz0RZO03UFRf8Exsss6GLeRJ5uNJNCAr8No5pmRMJo-_4BKW4OFDFxvSDSS_ZJaLMqJ0LNpwH1Z09SfD8TNW7VZqax4zuTSMX_yVSts40nzh4-_IxDZ1i7imnNSYPQa_Oq9ieJ56Q-xuOiGu9C3Hs3NmhwV8MNAcniVEzoDyFmx4z9YYcFPCDIoerPfSJIMFIWXcNlUTPSMRA-KfjSb_KYAErVfNctwOVglgCISA
Copy the token and paste it into the token field on the login screen.
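If you only want the token itself, without the rest of the describe output, a one-liner along these lines also works (it decodes the token field of the same secret):

control# kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode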
Once you're in, you can explore the cluster in a bit more detail — I really like this tool.
The next step in beefing up our cluster monitoring is to install Heapster.
Heapster enables container cluster monitoring and performance analysis for Kubernetes (versions v1.0.6 and higher) and for platforms that include it.
It provides cluster usage statistics from the console and adds detailed information about node and pod resources to the Kubernetes Dashboard.
Installing it on bare metal is not particularly hard, but I did have to do some digging to figure out why the tool didn't work in its original form; a solution was found, though.
So let's continue and deploy this add-on:
control# vi heapster.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
This is more or less the standard deployment file from the Heapster community, with one difference: to make it work in our cluster, the --source= line of the heapster deployment was changed as follows:
--source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
You can find all of these source options described in the Heapster documentation. Here we point it at the kubelet port 10250 and disable SSL certificate verification (it was causing a small problem).
We also need to add a permission to the Heapster RBAC role so it can fetch node stats. Add the following few lines at the end of the role:
control# kubectl edit clusterrole system:heapster
......
...
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
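If you'd rather not open an editor, roughly the same change can be applied with kubectl patch — a sketch that appends the same rule to the end of the rules list:

control# kubectl patch clusterrole system:heapster --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": [""], "resources": ["nodes/stats"], "verbs": ["get"]}}]'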
As a result, the RBAC role should look like this:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2019-02-22T18:58:32Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:heapster
  resourceVersion: "6799431"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Aheapster
  uid: d99065b5-36d3-11e9-a7e6-763fc8adcb06
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - nodes
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
OK, let's run a command to check that the heapster deployment started up and is working:
control# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kube-master1   183m         9%     1161Mi          60%
kube-master2   235m         11%    1130Mi          59%
kube-worker1   189m         4%     1216Mi          41%
kube-worker2   218m         5%     1290Mi          44%
kube-worker3   181m         4%     1305Mi          44%
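Per-pod metrics should now work too — just as an extra check, not strictly required:

control# kubectl top pods --all-namespaces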
If there is some data in the output, everything was done correctly. Let's go back to the dashboard page and look at the new charts that are now available.
From now on we can also track the actual resource usage of cluster nodes, pods, and so on.
If that's not enough, we can improve the statistics further by adding InfluxDB + Grafana, which also gives us the ability to draw our own Grafana dashboards.
We'll use the InfluxDB + Grafana installation from the Heapster Git page, but as usual with a few fixes. Since the heapster deployment is already configured, we only need to add Grafana and InfluxDB and then modify the existing heapster deployment so that it also writes metrics into Influx.
So, let's create the InfluxDB and Grafana deployments.
control# vi influxdb.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
Grafana is next. Don't forget to change the service configuration to enable the MetalLB load balancer, so that the Grafana service gets an external IP address.
control# vi grafana.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
And create them:
control# kubectl create -f influxdb.yaml
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created

control# kubectl create -f grafana.yaml
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
Then modify the heapster deployment and add the InfluxDB connection. Only one line needs to be added:
- --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
Edit the heapster deployment:
control# kubectl get deployments --namespace=kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
coredns                2/2     2            2           49d
heapster               1/1     1            1           2d12h
kubernetes-dashboard   1/1     1            1           3d21h
monitoring-grafana     1/1     1            1           115s
monitoring-influxdb    1/1     1            1           2m18s

control# kubectl edit deployment heapster --namespace=kube-system

... beginning bla bla bla
    spec:
      containers:
      - command:
        - /heapster
        - --source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
        image: gcr.io/google_containers/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
.... end
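After saving, the heapster pod will be recreated with the new sink. One quick way to keep an eye on it (just a sketch):

control# kubectl rollout status deployment heapster --namespace=kube-system
control# kubectl logs --namespace=kube-system deployment/heapster --tail=20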
Now find the external IP address of the Grafana service and log in to it:
control# kubectl get svc --namespace=kube-system
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
..... some other services here
monitoring-grafana   LoadBalancer   10.98.111.200   192.168.0.241   80:32148/TCP   18m
Open http://192.168.0.241 in a browser; the first time, use the admin/admin credentials.
After logging in, Grafana will be empty, but luckily we can get all the dashboards we need from grafana.com. We need to import dashboards number 3649 and 3646; during the import, choose the correct data source.
After that we can monitor node and pod resource usage and, of course, create our own dashboards.
OK, that's it for monitoring for now. The next thing we're likely to need is log collection for our applications and the cluster itself. There are several ways to implement it, all of them described in the Kubernetes documentation. Based on my own experience, I prefer to run the Elasticsearch and Kibana services externally and keep only the log-collection agent running on every Kubernetes worker node. This protects the cluster from overload caused by a large volume of logs or other problems, and it lets us keep receiving logs even if the cluster stops working completely.
The most popular log-collection stack among Kubernetes fans is Elasticsearch, Fluentd, and Kibana (the EFK stack). In this example we'll run Elasticsearch and Kibana on an external node (you can use an existing ELK stack) and run Fluentd inside the cluster as a DaemonSet on every node, acting as the log-collection agent.
I'll skip the part about creating a VM with Elasticsearch and Kibana installed; it's a fairly popular topic, so you can find plenty of material on the best way to do it — for example, in my earlier article. Just remove the logstash configuration fragment from the docker-compose.yml file, and also remove 127.0.0.1 from the elasticsearch ports section.
After that you should have a working Elasticsearch reachable at VM-IP:9200. To make the connection between fluentd and elasticsearch more secure, set up a login/password or security keys; personally, I often just protect it with iptables rules.
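Just as an illustration (the subnet is an assumption taken from this example's 192.168.0.x addressing — adjust it to your own network), the iptables protection on the Elasticsearch VM can be as simple as allowing port 9200 from the cluster subnet only:

# allow 9200 only from the cluster subnet, drop everything else
iptables -A INPUT -p tcp --dport 9200 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP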
All that's left is to create the fluentd DaemonSet in Kubernetes and point its configuration at the external address (and port) of the elasticsearch node.
We'll use the official Kubernetes add-on YAML configuration, slightly modified:
control# vi fluentd-es-ds.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.4.0
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    version: v2.4.0
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.4.0
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.4.0
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: k8s.gcr.io/fluentd-elasticsearch:v2.4.0
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0
Next, the fluentd configuration itself:
control# vi fluentd-es-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.2.0
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>

  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      type_name _doc
      include_tag_key true
      host 192.168.1.253
      port 9200
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
This configuration is basic, but it's enough for a quick start: it collects both system and application logs. If you need something more sophisticated, have a look at the official documentation on fluentd plugins and Kubernetes configuration.
Now let's create the fluentd DaemonSet in our cluster:
control# kubectl create -f fluentd-es-ds.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v2.4.0 created

control# kubectl create -f fluentd-es-configmap.yaml
configmap/fluentd-es-config-v0.2.0 created
Check that all the fluentd pods and the other resources are running correctly, then open Kibana. In Kibana, find and add the new index coming from fluentd. If you find something, everything was done right; if not, go back over the previous steps and recreate the daemonset or edit the configmap.
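A quick way to double-check from the Elasticsearch side that fluentd is really shipping logs (using the host and port set in the ConfigMap above; the logstash-* index names come from the logstash_format true option):

curl -s 'http://192.168.1.253:9200/_cat/indices?v' | grep logstash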
Great — now that we're getting logs from the cluster, we can build dashboards on top of them. Of course this configuration is the simplest possible, so you'll probably want to adjust it yourself; the main goal here was to show how it's done.
Having completed all the previous steps, we now have a really nice, ready-to-use Kubernetes cluster. Time to deploy a test application into it and see what happens.
For this example I'll use a small Python/Flask application called Kubyk that already has a Docker container, so we'll pull it from the public Docker registry. We'll then attach an external database file to this application, using the GlusterFS storage we configured earlier.
First, let's create a new PVC (PersistentVolumeClaim) for this application; it will hold the SQLite database with the user credentials. We can use the storage class created in part 2 of this guide.
control# mkdir kubyk && cd kubyk
control# vi kubyk-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kubyk
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

control# kubectl create -f kubyk-pvc.yaml
With the new PVC in place, we're ready for the deployment.
control# vi kubyk-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubyk-deployment
spec:
  selector:
    matchLabels:
      app: kubyk
  replicas: 1
  template:
    metadata:
      labels:
        app: kubyk
    spec:
      containers:
      - name: kubyk
        image: ratibor78/kubyk
        ports:
        - containerPort: 80
        volumeMounts:
        - name: kubyk-db
          mountPath: /kubyk/sqlite
      volumes:
      - name: kubyk-db
        persistentVolumeClaim:
          claimName: kubyk

control# vi kubyk-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubyk
spec:
  type: LoadBalancer
  selector:
    app: kubyk
  ports:
  - port: 80
    name: http
Let's create the deployment and the service:
control# kubectl create -f kubyk-deploy.yaml
deployment.apps/kubyk-deployment created
control# kubectl create -f kubyk-service.yaml
service/kubyk created
Check the new IP address that was assigned to the service, and the pod status:
control# kubectl get po
NAME                                READY   STATUS    RESTARTS   AGE
glusterfs-2wxk7                     1/1     Running   1          2d1h
glusterfs-5dtdj                     1/1     Running   1          41d
glusterfs-zqxwt                     1/1     Running   0          2d1h
heketi-b8c5f6554-f92rn              1/1     Running   0          8d
kubyk-deployment-75d5447d46-jrttp   1/1     Running   0          11s

control# kubectl get svc
NAME    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
... some text..
kubyk   LoadBalancer   10.109.105.224   192.168.0.242   80:32384/TCP   10s
It looks like the new application started successfully. If we open http://192.168.0.242 in a browser, we'll see the application's login page. You can log in with the admin/admin credentials, but if you try it at this stage you'll get an error, because there is no database available yet.
Here is an example of the error messages from the application pod, as shown in the Kubernetes Dashboard:
To fix this, we need to copy the SQLite DB file from the git repository into the PVC volume we created earlier; the application will then start using that database.
control# git clone https://github.com/ratibor78/kubyk.git
control# kubectl cp ./kubyk/sqlite/database.db kubyk-deployment-75d5447d46-jrttp:/kubyk/sqlite
We use the kubectl cp command to copy this file into the application's pod, and therefore into the volume.
We also need to give the nginx user write access to that directory, since the application is started by supervisord under the nginx user.
control# kubectl exec -ti kubyk-deployment-75d5447d46-jrttp -- chown -R nginx:nginx /kubyk/sqlite/
Let's try logging in again.
Now the application works properly. We can, for example, scale the kubyk deployment to three replicas so that one copy of the application runs on each worker node. Because we created the PVC volume earlier, all the pods with application replicas will use the same database, and the service will spread traffic between the replicas in round-robin fashion.
control# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
heketi             1/1     1            1           39d
kubyk-deployment   1/1     1            1           4h5m

control# kubectl scale deployments kubyk-deployment --replicas=3
deployment.extensions/kubyk-deployment scaled

control# kubectl get po
NAME                                READY   STATUS    RESTARTS   AGE
glusterfs-2wxk7                     1/1     Running   1          2d5h
glusterfs-5dtdj                     1/1     Running   21         41d
glusterfs-zqxwt                     1/1     Running   0          2d5h
heketi-b8c5f6554-f92rn              1/1     Running   0          8d
kubyk-deployment-75d5447d46-bdnqx   1/1     Running   0          26s
kubyk-deployment-75d5447d46-jrttp   1/1     Running   0          4h7m
kubyk-deployment-75d5447d46-wz9xz   1/1     Running   0          26s
Now we have a replica of the application on every worker node, so the application won't stop working even if some nodes are lost, and, as mentioned earlier, the load is spread out as well. Not a bad place to start.
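To confirm that the replicas really landed on different worker nodes, something like this will do (the app=kubyk label comes from the deployment above):

control# kubectl get po -l app=kubyk -o wide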
Let's create a new user in the application.
Every new request is handled by the next pod in the list, which you can see in the pods' logs: for example, after one pod creates a new user, the next pod answers the following request. Since the application keeps its database on a single persistent volume, the data stays safe even if all the replicas are lost.
Large, complex applications will need more than just one volume for a database — various volumes for persistent data and for many other components.
Well, we're almost done. Kubernetes is a huge and fast-moving topic, and many more aspects could be added, but we'll stop here. The main purpose of this series of articles was to show how to build your own Kubernetes cluster, and I hope this information has been useful to you.
PS
And of course, the stability and stress tests.
The cluster scheme from this example can keep working without two worker nodes, one master node, and one etcd node. If you feel like it, disable them and check whether the test application keeps working.
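A gentler way to simulate losing a worker node, instead of powering it off, is to drain it and watch the pods being rescheduled — just a sketch, and flag names may vary slightly between kubectl versions:

control# kubectl drain kube-worker3 --ignore-daemonsets --delete-local-data
control# kubectl get po -o wide
control# kubectl uncordon kube-worker3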
While putting these guides together, I prepared a production cluster in almost the same way. Soon after the cluster was built and the applications were deployed, a serious power failure occurred: every server in the cluster went down completely — a sysadmin's living nightmare. Some servers also picked up file system errors after the long outage. But after everything was rebooted, the Kubernetes cluster recovered completely: all the GlusterFS volumes and deployments came back up. To me, this demonstrates the great potential of this technology.
Good luck to all of you, and see you next time!